What is Artificial Intelligence?

Before we begin, let’s get definitions straight — what is artificial intelligence (AI)? Or more specifically, what isn’t it? As you might expect, ask any technical person for a definition of something, and you’ll get a dozen different responses, and at least half of them will be, “It depends.”

In my humble opinion, AI is a machine or program that learns from its experience and adapts its behavior accordingly.

As much as I love chatbots and voice interfaces, this excludes most of them, as they tend to have a static selection of knowledge and interactions. Their developers may learn from application logs and change the application over time, but that’s not the same. Current popular and widespread applications of AI include game intelligences (playing or running), self-driving cars, and object manipulation and perception.

AI is not new; I even took a handful of AI modules at university way back in the distant past of 2002, and its history stretches back much further than that. While the techniques have existed for some time, recent demand and plunging hardware prices have created a surge of activity, interest, projects, and applications.

Many other technologies come under the general AI umbrella that you have likely heard about, including machine learning for training an intelligence, and a variety of technologies for processing inputs and outputs (language processing, image recognition, etc.). AI is a big topic to try to summarize, and in the words that follow, I am simplifying as much as possible, so bear that in mind. Okay, here we go!

Machine Learning

Think of machine learning as (somewhat) similar to humans going to school. It’s how your application gains the initial knowledge it needs to be useful, knowledge it then builds upon as it learns from experience. Much like humans at school, it can make judgments and act, depending on what it learns and how you teach it.

If you give an application a disconnected, limited, or biased information set, then it may not be as effective. Getting the balance right for training an AI is hard and (even more than usual in the tech space) really depends on your use case. If you are creating an AI for a niche use case, with a smaller data set available, then it won’t need as much training as an AI for a broader purpose, or one where you can potentially pull from a lot of data sources.
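
To make that concrete, here’s a minimal sketch of supervised machine learning. I’m using scikit-learn purely for illustration (it isn’t one of the libraries I list later), and the training data is entirely made up:

    # Train a classifier on labeled examples, then let it judge cases it has never seen.
    # Requires scikit-learn; the data and labels are invented for this sketch.
    from sklearn.tree import DecisionTreeClassifier

    # Each example is [hours_studied, hours_slept]; the label is pass (1) or fail (0).
    training_data = [[8, 7], [1, 4], [6, 8], [2, 3], [7, 6], [0, 5]]
    labels = [1, 0, 1, 0, 1, 0]

    model = DecisionTreeClassifier()
    model.fit(training_data, labels)        # "school": learn from the labeled examples

    print(model.predict([[5, 7], [1, 2]]))  # make judgments about new, unseen cases

The shape of the process is the point: you teach with labeled examples, and the model then makes judgments about data it was never shown.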

Deep Learning and Neural Networks

A subset of machine learning, deep learning focuses on attempting to replicate the way the human brain works. That’s a mysterious statement, as we don’t even fully understand how the human brain works; it’s more that computing power now lets us better emulate how we believe the brain works and how it learns from experience.

Neural networks are the technique behind deep learning: many connected nodes that tackle a task by considering examples and then sharing what they learn with the other nodes in their network. Once one node learns what success at a task looks like, it can share that with all the other nodes and move on to new experiments.
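
As a tiny, hedged illustration, here’s what a neural network can look like in Keras (which appears in the tools list later). It learns the XOR function from four examples; the layer sizes and epoch count are arbitrary choices for the sketch, not recommendations:

    # A minimal neural network with Keras: learn XOR from four labeled examples.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])                           # XOR of the two inputs

    model = Sequential()
    model.add(Dense(8, activation="relu", input_dim=2))  # a hidden layer of 8 nodes
    model.add(Dense(1, activation="sigmoid"))            # a single yes/no output node

    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(X, y, epochs=500, verbose=0)               # learn from the examples

    print(model.predict(X).round())                      # should approximate [0, 1, 1, 0]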

Cognitive Computing

If you thought the definitions so far were broad, then prepare to be surprised, as there is no agreed definition of cognitive computing, so I will attempt my own.

If neural networks attempt to simulate the brain, then I think cognitive computing helps enhance that “brain” with useful sensory information. This information includes textual and aural language, images, heat, spatial awareness, and more. The stream of extra data supplied to the network helps it adapt and respond to changes, making decisions based upon them.

Computer Vision

While some may now consider it a part of cognitive computing, computer vision is more established with a longer history; I even remember image recognition being one of my favorite units at university.

In an AI context, “vision” also includes images we aren’t used to seeing: machines can process types of visual input that we can’t, such as X-ray or infrared.
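
As a small example of what this looks like in practice, here’s a hedged sketch using OpenCV (listed in the tools section below) to find faces in a photo with one of its pre-trained Haar cascades. The image path is a placeholder:

    # Detect faces in a photo with OpenCV; "photo.jpg" is a placeholder path,
    # and the Haar cascade file ships with the opencv-python package.
    import cv2

    image = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # classic detectors work on grayscale

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

    print("Found %d face(s)" % len(faces))
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # box each face

    cv2.imwrite("photo_with_faces.jpg", image)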

Natural Language Processing (NLP)

As a writer, NLP is the aspect of AI most interesting to me. Again, it’s not a new discipline, but recent advances have pushed it further forward. NLP deals with interpreting written or spoken human language to understand its content, context, and intent, as well as responding to a human appropriately based on what it’s learned.
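
To give a flavor of the first steps, here’s a hedged sketch with NLTK (also in the tools list below) that breaks a sentence into tokens and tags each word’s part of speech, which is groundwork for working out content and intent. The sentence is my own invention:

    # Tokenize a sentence and tag parts of speech with NLTK; the sentence is made up.
    import nltk

    # One-time downloads of the tokenizer and tagger models.
    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    sentence = "Please book me a table for two at seven tonight."
    tokens = nltk.word_tokenize(sentence)

    # Prints each token paired with a part-of-speech tag (nouns, verbs, pronouns, and so on),
    # the kind of structure an NLP system builds on to work out what you actually want.
    print(nltk.pos_tag(tokens))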

Tools and Libraries

When it comes to tool and library recommendations, there are some specific to each section and others that span multiple categories. A lot of libraries in this field are aimed at Python developers, but I’ll try to include a few that support other languages, too.

Major Cloud Providers

IBM is pushing hard on their already famous AI tools and platform; Watson has a service and library to suit most of the use cases above, often with an option to self-host or run in their cloud.

Unsurprisingly, all the leading cloud providers have their own offerings: Google (whose tooling includes the popular TensorFlow), Azure, and AWS each provide a range of AI and machine-learning services.

There are also plenty of self-installed open-source options, and a quick internet search turns up dozens of them. Here’s a small selection of the common favorites.

  • Keras, a high-level neural-network library for Python that can sit on top of other deep-learning libraries and aims to make experimentation with models easier.
  • MXNet, a new but already popular deep-learning library that supports multiple programming languages and deployment methods.
  • Deeplearning4j is a JVM-based deep-learning library that also has an enterprise-friendly offering with built-in visual notebooks for experimentation.
  • Spark MLlib: if you are already using Spark for data streaming, this additional library helps you do more with that data.
  • OpenCV is a widely used (and supported) library for computer vision.
  • SimpleCV is similar and snapping at its heels.
  • NLTK is a Python library for processing and understanding natural language.
  • And for the JVM users, OpenNLP is what you’re looking for.

Ethics

Technically minded folks may wonder why I’ve added this point, but it’s something important to me, so I’m sneaking it in.

As we (as businesses and societies) become more reliant on AI to undertake an increasing number of tasks for us, we need to be careful. I’m not a believer in sci-fi predictions of killer robots, but there are other real and more immediate issues related to AI.

The lack of diversity in the tech industry is especially relevant when it comes to unsupervised automated systems making decisions based on training data. I don’t think engineers intentionally introduce biased data into their machine-learning models, but we are often unaware of our own subconscious biases, especially when there is no one different from us in our team to challenge what we do as “not right.” Remember…

Algorithms aren’t biased, but people are.


