Category: AI & Machine Learning

December 19th, 2024 by Jj0hnnyj

I spoke on a Docent Institute webinar last night, co-hosted by the Iowa-Illinois IEEE Section – R4 and the IEEE Computer Society (Iowa-Illinois Chapter). The talk, “Navigating Cybersecurity and Systemic Risk in a Rapidly Advancing Technological Landscape,” covered #cybersecurity and #systemic_risk. Thanks to co-hosts Binto George and Michael Umakor. You may find the video and slides here:

Posted in AI & Machine Learning, Autonomous Vehicles, Blockchain, Cloud, Critical Infrastructure, Cryptography, Cyberinsurance, Cybersecurity & Infosec, Emerging Technology, Ethics, Future Views, IoT, IIoT, ICS-SCADA, Law and Regulations, Presentations & Webinars, Privacy, Quantum Computing, Resiliency, Systemic Risk

April 5th, 2019 by John

Posted in AI & Machine Learning, Cybersecurity & Infosec, Emerging Technology, IoT, IIoT, ICS-SCADA

December 16th, 2018 by John

Posted in AI & Machine Learning, Ethics

November 7th, 2018 by John

Seeking 12 Innovative AI, Blockchain or IoT Solutions for Agriculture – RSVP by 21 Nov

The IEEE Blockchain for Agriculture Forum is seeking technologists to “wow” the agriculture industry with Lightning Round Sessions at its upcoming “Roadmap to Adoption” forum, which will take place on 28-29 November 2018 in Honolulu, Hawaii. The objective of this Forum is to design the roadmap to blockchain adoption. The forum does not focus on why blockchain in combination with AI and IoT is right for the agriculture value chain, but rather on how we will drive effective adoption of these technologies. Learn more about the event at: https://blockchain.ieee.org/standards/agriforum18

If you have a POC or solution using IoT, artificial intelligence, or blockchain that will help optimize and secure the agriculture value chain, while also enhancing consumer safety and knowledge about the food they eat, then please submit your idea.

LIGHTNING ROUND SESSION FORMAT

• Presentations are limited to 15 minutes.
• An overview of the POC or solution, and its use of IoT, artificial intelligence (AI) or blockchain in the agriculture value chain, from the seed in the ground to the food on the table.
• Presentations should be business-technical in nature and focus on the application and solution.

SUBMISSION GUIDELINES

There is no fee to participate. We are accepting 12 submissions, based on relevance and on a first-come, first-served basis.

Deadline for submissions is Wednesday, 21 November 2018. (Proposals received after this date may be considered if there is still space available for participation.)

Your submission must include a one-page (maximum) synopsis of the POC or application and the challenge it is trying to solve within the agriculture industry. If you have design graphics or whitepapers available for your application, please include them in your submission.

All submissions should be sent to Maria Palombini, forum project leader, at m.palombini@ieee.org.

Presenters will be notified of acceptance within 48 hours. Accepted presenters will receive full complimentary registration to the Forum.

Posted in AI & Machine Learning, Autonomous Vehicles, Blockchain, Events, IoT, IIoT, ICS-SCADA

September 22nd, 2018 by John


Posted in AI & Machine Learning, Emerging Technology, Future Views

April 12th, 2018 by John

Artificial intelligence (AI) and machine learning (ML) have increasingly been used as product marketing buzzwords. Nevertheless, I feel certain that AI will be an integral part of how we approach cybersecurity in the future, with particular applications for securing manufacturing environments.

See John’s earlier article on Securing the Industrial Internet of Things.

Machine learning is a functional subset of AI. With large data sets and expert training, machine learning can help identify cybersecurity threats in much the same way your GPS app can identify the best route home during rush hour. Where ML has proven its ability to accurately distinguish known and unknown threats across the stages of the kill chain, it can help you detect and respond to threats faster and better. ML can provide meaningful insights, and even actionable information, to cybersecurity analysts, aiding incident triage and reducing the impact of incidents on your organization.
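To make this concrete, here is a minimal sketch of the supervised-learning idea, not any particular vendor's product: it assumes you already have historical events reduced to numeric features (bytes transferred, connection counts, and so on) with known-good/known-bad labels, and it uses scikit-learn purely for illustration with synthetic placeholder data.

```python
# Hypothetical illustration: supervised ML scoring events as malicious vs. benign.
# The features and labels here are synthetic placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))               # stand-in feature vectors per event
y = (X[:, 0] + X[:, 3] > 1).astype(int)      # stand-in "malicious" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Rank new events by estimated risk so analysts triage the worst first.
risk = model.predict_proba(X_test)[:, 1]
print("events scoring above 0.9:", int((risk > 0.9).sum()))
```

In practice the hard part is the labeled data and the expert training, not the model call.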

The manufacturing floor in a factory is often filled with special-purpose equipment, legacy systems and Industrial Internet of Things (IIoT) devices. Automation is leveraged for moving parts down the production line. Robots cut metal, assemble parts and spray paint. As opposed to the office environment, which is more likely to follow IT standards for the configuration of computers, printers and applications, the manufacturing environment is usually less well understood and less easily managed, due to the variety of systems in use.

In the manufacturing environment, specialized IIoT devices may communicate using special, non-IT protocols. Compared to office computers, they often lack security controls, manageability and the ability to be patched; if they can be patched, it may require local access. These devices may not have unique passwords, or passwords may have to be changed manually with special equipment. Oftentimes, a factory will not have a detailed inventory of all manufacturing assets.

In addition to the variety and manageability of manufacturing systems, they sometimes have special requirements. They may require unauthenticated Internet access, or support by external vendors. They may run software which requires older, sometimes unsupported, OS versions. They may be specialized or embedded systems, unable to run the requisite desktop security software. They may be turned off for months at a time, or they may reboot every time a piece of equipment is turned on and have no persistent memory. Another factory consideration is the agreed-upon rules for wage employees using manufacturing systems and software: production line computers may be forced to use shared passwords or rely on auto-logon.

The factory environment can be quite different from the office environment; however, a manufacturing company cannot afford for its production line to go down. It needs to manage the environment and still prevent the viruses, attacks and misconfigurations which could stop production. Of course, the CISO needs to protect the entire large, complex enterprise network, of which the factory is just a subset.

There are two ways to do this. The first is to segment the network: allow legitimate traffic to flow, but isolate systems so that a specialized or legacy system that is attacked will not affect other factory or corporate systems. When you are dealing with a large, legacy network (with perhaps many other factories around the world), that is easier said than done.

You may be able to implement policies (often manually configured) and IP segmentation to reduce the risk of abuse (e.g. malicious USB drives, downloads from the Internet). Full network segmentation with firewalls is probably untenable, so the second approach, relying on the ability to monitor, detect and respond, may be the most effective. This relies on the ability to collect network data, establish what normal traffic looks like, and detect inappropriate or malicious behavior and misconfigurations. Visibility into existing data sources, such as NetFlow, proxy and DNS logs, can be used to identify attacks that would otherwise fly under the radar or be lost in the noise.
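As a rough illustration of that monitor-and-detect approach (a sketch only, assuming flow records have already been summarized into per-host numeric features such as bytes out, distinct peers and DNS query counts), an unsupervised model can learn the baseline and surface the hosts that deviate from it:

```python
# Hypothetical illustration: learn a traffic baseline, then flag outlier hosts.
# The numbers are synthetic; real features would come from NetFlow/proxy/DNS data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
baseline = rng.normal(size=(500, 3))                     # last month's per-host profiles
today = np.vstack([rng.normal(size=(98, 3)),
                   [[8.0, 9.0, 7.0], [7.0, 8.0, 9.0]]])  # two hosts behaving oddly

detector = IsolationForest(contamination=0.02, random_state=0).fit(baseline)
scores = detector.decision_function(today)               # lower score = more anomalous
print("hosts to investigate first:", np.argsort(scores)[:2])
```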

Anomaly detection alone can be very noisy, and potentially devastating threats could still fly under the radar. The best way to deal with a diverse environment and unique, unexpected traffic seems to be to additionally leverage machine learning to look for the specific East-West traffic indicative of malicious actors, viruses and misconfigurations which could bring your production line to a grinding halt. Along with automation and orchestration, the extent of an attack can be presented to the analyst so that systems are quarantined and recovered more quickly and the impact is minimized.

This solution needs to be scalable, across factories and across the large and complex enterprise network. The ability to aggregate and analyze network data across silos is a much more scalable and affordable solution than purchasing and deploying hardware.

While there is no doubt that many CISOs will roll their eyes when they hear certain buzzwords, the manufacturing environment is unique and would benefit from an AI solution that monitors logs and traffic and uncovers malicious activity which might otherwise be lost in the noise. When you cannot simply segment and isolate systems, the ability to intelligently monitor their communications and provide detailed, actionable information to security analysts seems the best approach to securing the manufacturing environment.

Posted in AI & Machine Learning, Blog

April 9th, 2018 by John

One of the terms most frequently maligned and misused by information security vendors is artificial intelligence (AI). AI and machine learning (ML) are, however, not a passing fad. Let me define these terms in more detail.

Artificial intelligence has been the subject of sci-fi novels for several decades now: computer systems with “brains” to rival or surpass those of humans. The idea that AI systems are “thinking machines” that in any way rival the human brain is fiction, at least for the near future.

Expert systems are rules-based decision trees which capture the decision-making of experts. For example, a help desk manager might develop a flow chart for troubleshooting a customer’s computer problems over the phone: “Is your computer plugged in?” “Are there any error codes?” Expert systems codify a person’s expertise, so that others, or even computer automation, may be used without an expert present. Expert systems follow very specific rules, without exception.
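A toy sketch of that idea, with the help-desk flow chart hard-coded as rules (entirely hypothetical), might look like this; note that anything outside the scripted questions simply falls through to a human:

```python
# Hypothetical expert system: a fixed decision tree for phone troubleshooting.
from typing import Optional

def troubleshoot(plugged_in: bool, error_code: Optional[str]) -> str:
    if not plugged_in:
        return "Plug the computer in and try again."
    if error_code is None:
        return "No error code reported; escalate to a technician."
    if error_code == "E01":
        return "Known fault E01: reseat the memory."
    # An unexpected answer is not handled -- the system cannot adapt.
    return f"Unrecognized code {error_code}; escalate to a technician."

print(troubleshoot(plugged_in=True, error_code="E01"))
```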

An expert system has the shortcoming that it can only make decisions based on predetermined alternatives; if faced with an unexpected response, it doesn’t know what to do. An applied AI system has some ability to make decisions when uncertainty or new situations occur. Think of the brains behind a self-driving car (autonomous vehicle): it can identify and classify objects (a human vs. a chicken crossing the road), it may recognize language or the mood of a driver from their facial expressions, and it makes predictions and takes autonomous action to simultaneously follow the rules of the road, optimize the comfort of the passengers, maximize fuel economy and avoid hitting other cars and objects. A self-driving car is an example of generalized (broad) AI, making many decisions based on various inputs. It utilizes machine learning to learn from experience and improve. AI is adaptive, compared to the rigid rule base of an expert system.

Machine learning improves when you have more extensive data sets, realistic scenarios and experts to help in the learning process. A narrow AI system might be designed only to distinguish between objects: “Which is a person, or a chicken, or a toaster?” The machine learning process might involve many photos from the Internet and human assistance to distinguish between the objects, until the training is adequate to allow the AI system to classify objects on its own. This is precisely what search engines do to classify images, presenting you with pictures of a “red ball” when you perform such a search.
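Here is a minimal sketch of that narrow, single-task training loop, using scikit-learn’s bundled digits images purely as a stand-in for labeled photos gathered from the Internet:

```python
# Hypothetical illustration of narrow AI: train on labeled images, then classify new ones.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()                       # small 8x8 grayscale images with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000)      # one simple learner, one narrow task
clf.fit(X_train, y_train)                    # "training" = human-labeled examples
print("accuracy on unseen images:", accuracy_score(y_test, clf.predict(X_test)))
```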

An example of a more complex AI system might be a GPS application (brain in the cloud) that can tell you, in real time, the fastest route to take home from work during rush hour. You can imagine the many inputs and variables necessary to make real-time predictions which are pretty accurate. While the physics of moving traffic is known, your GPS is still unable to predict when humans will make decisions leading to accidents that will ensnarl traffic and force you onto a secondary route.

Deep learning is a subset of machine learning that uses algorithms and artificial neural networks to mimic the structure of the brain. As opposed to a single layer of machine learning, it has multiple layers. To return to our image recognition example: rather than learning objects by taking data from a photo and asking, “Is this a red ball? Y/N”, deep learning may have layers that look at edges, shapes, textures, colors, and so on. Deep learning requires much more data and more intensive training than single-layer machine learning, but may provide the nuanced answers needed for the successful creation of highly complex, independent AI systems in the future.
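To show what “multiple layers” looks like in code, here is a minimal sketch of a small, untrained convolutional network (assuming PyTorch is available; the layer stack is the point, not the task): early layers can learn edge-like features, later layers shapes and textures, and the final layer makes the “red ball or not” call.

```python
# Hypothetical multi-layer (deep) model; structure only, no training shown.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edge-like features
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: shapes and textures
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # final decision: red ball / not
)

x = torch.randn(1, 3, 64, 64)      # one dummy RGB image
print(model(x).shape)              # torch.Size([1, 2])
```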

Figure 1: Relationship between AI, ML and DL.

If we accept that autopilot in airplanes, autonomous vehicles and predicting the time it will take to get home in rush hour traffic are valid uses of AI, why would we hesitate to look at the benefits AI can bring to cybersecurity? It only makes sense to consider the value of extracting “intelligence” from lower in the stack and rolling it up into something actionable to put in front of a human (security analyst).

Large, diverse and complex network environments can be rife with misconfigurations and vulnerable to attacks and insider threats. There are many legacy applications, non-compliant systems, IoT, OT and rogue devices on a large enterprise network. Consider, in addition, the difficulty of monitoring a cloud deployment. There are threats we can identify because we “know we know” them, and those we “know we don’t know”; as the threat landscape grows at an exponential rate, more and more will fall into the third category: what we “don’t know we don’t know”.

Just as the threats are growing, the impact to factory systems, critical infrastructure, medical equipment/personal health devices, and our own safety and privacy is growing. It is projected that we will have perhaps 30 billion IoT devices connected to the Internet in the next two years. Most IoT devices come with default passwords, no secure update process and other undiscovered vulnerabilities. In other words, nothing that we connect to any network is truly secure, and every insecure thing puts every other insecure thing at risk. We can’t protect everything equally well (nor should we), so everything puts everything at risk without some method of segmentation/isolation and monitoring to shorten the time it takes to detect, stop and recover from an attack. If well trained, AI is well suited to help us identify vulnerabilities and detect attacks in this complex system of systems we have created.

Without the benefits of AI, with validated models and expert training, I will claim that it will be impossible to keep up with the volume, velocity and veracity of network, logging and other diverse security big data. The data needs to be in the right format before a SIEM can ingest it. Feeding it into a SIEM without any machine learning relies only on custom-written triggers; in other words, you write scripts to catch the confluence of the attack patterns you “know you know”. The number of SIEM rules (which adds overhead) and the glut of SIEM events and alerts only motivate hiring more and more SOC analysts. Meanwhile, the signal-to-noise ratio is low, and out of all the daily (hundreds? thousands?) alerts that need to be tracked down by humans, only a few will matter. The SOC model of pulling low-level data together and stitching it into something useful at the high-level SIEM is broken – it won’t scale. What is needed is more intelligence in the various systems, lower in the stack, so that only the meaningful events boil up to be inspected. AI can play an important role in solving this problem, and in addressing the projected shortage of SOC analysts in the coming years. Let AI detect, correlate and report actionable intelligence to the security analyst. Let people do what they do best, and leverage AI where it adds value.
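To illustrate what a “know you know” trigger looks like, here is a toy, hand-written correlation rule (with hypothetical event fields, not any real SIEM’s syntax): it catches exactly the brute-force pattern it was written for and nothing else, which is why this approach alone cannot scale.

```python
# Hypothetical hand-written correlation rule: N failed logins followed by a success.
from collections import defaultdict

def correlate(events, threshold=5):
    failures = defaultdict(int)
    alerts = []
    for e in events:                     # e.g. {"user": "svc1", "outcome": "failure"}
        if e["outcome"] == "failure":
            failures[e["user"]] += 1
        elif e["outcome"] == "success" and failures[e["user"]] >= threshold:
            alerts.append(f"possible brute force against {e['user']}")
            failures[e["user"]] = 0
    return alerts

events = [{"user": "svc1", "outcome": "failure"}] * 6
events.append({"user": "svc1", "outcome": "success"})
print(correlate(events))   # ['possible brute force against svc1']
```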

Just as the traditional SIEM model is broken, behavioral or anomaly detection that only looks for variances from an established baseline is inadequate. Analysts don’t have time to look at all the “anomalous behavior”; the signal-to-noise ratio is too low. AI can augment behavioral/anomaly detection to deliver what is meaningful for the analyst to investigate. It can be a force multiplier.

Is this perfect? Nope. Not by a long shot, but it is the direction we should be working in. As our systems get “smarter”, train them to do what machines can do well. Don’t believe the vendors; ask them to prove it. Ask for value-added use cases from other happy customers. Don’t buy vaporware or empty promises. But don’t slam the door in their face, either.

Posted in AI & Machine Learning, Blog

January 2nd, 2018 by John


Posted in AI & Machine Learning, Cool-Stuff, Cybersecurity & Infosec, Vendors & Product Reviews