
What is Artificial Intelligence?

Published April 7, 2025

Tech 101

Artificial Intelligence


Generative AI uses new approaches like transformers and diffusion models to produce human-like text, images and code.



Humans have dreamed of building intelligent machines since at least the time of the ancient Greeks, but it wasn’t until digital computers arrived that AI became possible. John McCarthy, a Dartmouth College professor, coined the term "artificial intelligence" in 1955, and despite fluctuating periods of progress and stagnation, AI has been progressively woven into the fabric of daily life. Today, AI is everywhere, from machine learning-powered recommendation systems (Amazon, Netflix) and digital assistants (Siri, Alexa) to the millions of robot vacuums roaming living rooms.

What is AI and Where Did It Come from?

In essence, AI harnesses computers to discern patterns and predict outcomes from datasets large or small, whether climate trends, personal buying patterns or stacks of pictures. An AI model is a system that can learn from data and make decisions or predictions without being explicitly programmed to do so.
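To make that idea concrete, here is a minimal sketch in Python (assuming the scikit-learn library; the tiny dataset and the rule it encodes are invented purely for illustration). The model infers the relationship between inputs and outputs from examples alone, rather than being programmed with the rule:

```python
# A toy model "learning" a pattern from data instead of being
# explicitly programmed with it. Data is invented for illustration.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]   # inputs (e.g., months of historical data)
y = [2, 4, 6, 8]           # observed outcomes

model = LinearRegression().fit(X, y)   # training: fit parameters to the data
print(model.predict([[5]]))            # predict an unseen input -> ~[10.]
```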

The field's acceleration owes much to Moore's Law and rapid technological innovation, but several structural changes have been pivotal:

Big Data: An explosion of data generation, including the internet's publicly available information, has empowered AI training.

Parallel Processing: AI has adopted techniques from supercomputing and cloud computing, enabling the simultaneous use of thousands of processors.

Hardware Accelerators: Specialized hardware dramatically increases the speed of certain types of computation, like matrix multiplication. Accelerators range from standalone devices to dedicated pieces of silicon to IP blocks built into a microprocessor.

Low/Mixed-Precision Computing: AI has benefited from representing numbers with fewer bits, boosting speed without significantly compromising accuracy.
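For a feel for what "fewer bits" means in practice, here is a rough NumPy sketch (illustrative only, with made-up values): casting numbers from 32-bit to 16-bit floating point halves their memory footprint at the cost of a small rounding error.

```python
# Illustrative only: representing the same numbers with fewer bits.
import numpy as np

weights = np.random.rand(1_000_000).astype(np.float32)  # 32-bit weights
half = weights.astype(np.float16)                       # same values, 16 bits each

print(weights.nbytes, half.nbytes)                      # 4000000 vs. 2000000 bytes
print(np.abs(weights - half.astype(np.float32)).max())  # tiny rounding error
```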

 

In parallel with these technological changes were advances in a core component that had been part of AI since the beginning: neural networks.

A type of mathematical model inspired by the human brain, neural networks take input (e.g., pictures of cars), pass it through neurons, or nodes within the network's hidden layers, and produce an output (e.g., a classification of “car”).
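For the curious, here is a minimal sketch of one forward pass through a tiny network, with random placeholder weights standing in for the ones a trained network would have learned from data:

```python
# One forward pass through a tiny neural network: input -> hidden
# layer -> output. Weights are random placeholders for illustration.
import numpy as np

def relu(x):
    return np.maximum(0, x)        # a common activation function

x = np.array([0.5, -1.2, 0.3])     # input features (e.g., pixel values)

W1 = np.random.randn(4, 3) * 0.1   # weights into a 4-neuron hidden layer
b1 = np.zeros(4)
W2 = np.random.randn(1, 4) * 0.1   # weights into a single output neuron
b2 = np.zeros(1)

hidden = relu(W1 @ x + b1)         # hidden-layer activations
output = W2 @ hidden + b2          # the network's output (e.g., a "car" score)
print(output)
```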

[Illustration: the four stages of a neural network: 1. Input (e.g., an image of a car); 2. Parameters training; 3. Classification (e.g., analyzing handwriting to infer gender, age and handedness); 4. Output.]

How AI is Built

The first major step in building an AI model is called training, which can require immense computational speed and scale (or a lot of patience).

By traversing the neural network forward and backward in a rapid-fire feedback loop and fitting it to known aspects of the dataset, the AI model “learns” its internal variables, known as parameters, adjusting them during training until the model makes accurate predictions. The behavior of the network depends on the strengths (or weights) of the connections between neurons.
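A toy sketch of that loop, with the gradient computed by hand on a one-parameter model (real frameworks automate both the forward and backward passes), might look like this:

```python
# Fitting one parameter w so that w * x matches known answers y.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # known aspects of the dataset (true rule: y = 2x)

w = 0.0       # the parameter the model "learns"
lr = 0.05     # learning rate: how far to adjust per step

for step in range(100):
    pred = w * x                        # forward pass
    grad = 2 * np.mean((pred - y) * x)  # backward pass: gradient of squared error
    w -= lr * grad                      # adjust the weight to reduce the error

print(w)   # converges toward 2.0
```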

[Illustration: the training process: 1. a model moving through input, simulation, training and validation stages, with evaluation and adjustment at each; 2. Parameters + Classification, where AI generates parameters via decision trees, linear models, neural networks or design models; 3. Output.]

 

Next is fine-tuning, where the pre-trained AI model is adjusted to incorporate new information or better fit a specific dataset or task.
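As an illustrative sketch of one common form of fine-tuning (assuming the PyTorch and torchvision libraries; the 10-class task is a placeholder), a pre-trained model's layers are frozen and only a new final layer is retrained for the new task:

```python
# Fine-tuning sketch: freeze a pre-trained model's learned features
# and retrain only a new final layer for a hypothetical 10-class task.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pre-trained model

for param in model.parameters():   # freeze the pre-trained parameters
    param.requires_grad = False

model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new head for the new task

# During fine-tuning, only the new head's parameters are updated.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```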

Once a model is trained, it can be applied again and again for inference, which uses the model to take new inputs and generate new, relevant outputs.
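Inference is comparatively lightweight: one forward pass per input, with no learning involved. A minimal sketch, again assuming PyTorch and torchvision, with a random tensor standing in for a real preprocessed image:

```python
# Inference sketch: apply a trained model to a new input, with
# gradient tracking disabled since no learning happens.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.eval()                             # switch to inference behavior
with torch.no_grad():                    # no gradients needed
    image = torch.randn(1, 3, 224, 224)  # placeholder input
    scores = model(image)                # forward pass only
print(scores.argmax(dim=1))              # the predicted class
```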

From AI that Makes Decisions to AI that Makes Stuff

Deep learning supercharged neural networks by incorporating multiple layers that enhance accuracy and training efficiency. Neural networks and deep learning are behind a lot of the most popular and powerful tools in AI. (They’re not the only ones: Knowledge graphs, reinforcement learning and neuro-symbolic AI are a few other approaches having an impact.)

The most recent technological leaps delivered generative AI (GenAI), where machines can craft human-like responses and original visual content – sort of like having a super-smart, creative assistant that can respond to your questions and iterate on its creations. Today’s GenAI comes in different “flavors,” with each made possible by several key innovations.

 

Transformers are a type of neural network architecture introduced by Google in 2017. Trained on large bodies of text without human supervision, they process entire sentences simultaneously, improving the understanding of words in context by paying attention to the most relevant words. Transformers, and the large language models (LLMs) they have enabled, revolutionized language translation, chatbots and content creation, and have even demonstrated emergent capabilities in text generation, coding and weather prediction. Examples: BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer) models such as GPT-3 and GPT-4.
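At the heart of the architecture is that attention mechanism. Here is a bare-bones NumPy sketch of scaled dot-product attention, with toy shapes and random values standing in for learned embeddings:

```python
# Scaled dot-product attention: each token's representation becomes a
# weighted average of all tokens, weighted by query-key similarity.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

seq_len, d = 4, 8                # 4 tokens, 8-dimensional embeddings
Q = np.random.randn(seq_len, d)  # queries
K = np.random.randn(seq_len, d)  # keys
V = np.random.randn(seq_len, d)  # values

attn = softmax(Q @ K.T / np.sqrt(d))  # how much each token attends to the others
output = attn @ V                     # context-aware token representations
print(attn.round(2))
```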

For image generation, diffusion models – conceptualized at Stanford in 2015 and inspired by diffusion processes in physics – have shown remarkable progress and abilities. Examples: DALL-E 2, Stable Diffusion.
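The core idea is easy to sketch: data is gradually destroyed with noise, and the model learns to reverse the process. Below is a NumPy illustration of the forward (noising) half, with an illustrative noise schedule; the generative part, learning to denoise step by step, is what the model trains on:

```python
# The forward (noising) half of diffusion: an "image" is gradually
# destroyed by Gaussian noise over many small steps. A diffusion
# model is trained to reverse this, denoising to generate images.
import numpy as np

x = np.random.rand(8, 8)               # stand-in for an image
betas = np.linspace(1e-4, 0.02, 1000)  # per-step noise amounts (illustrative)

for beta in betas:
    x = np.sqrt(1 - beta) * x + np.sqrt(beta) * np.random.randn(8, 8)

print(x.std())   # after enough steps, x is nearly pure noise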

Multimodal models, capable of interpreting various data types, have furthered AI’s comprehension and versatility. Examples: CLIP (Contrastive Language-Image Pretraining), DALL-E.
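One way to picture the multimodal idea, in a NumPy sketch where random vectors stand in for real encoder outputs: images and text are embedded into a shared vector space, and similarity scores tell you which caption best matches an image.

```python
# The multimodal idea in miniature: compare image and text embeddings
# in a shared space. Random vectors stand in for encoder outputs.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

image_embedding = normalize(np.random.randn(1, 512))  # from an image encoder
text_embeddings = normalize(np.random.randn(3, 512))  # one row per candidate caption

similarity = image_embedding @ text_embeddings.T      # cosine similarities
print(similarity, similarity.argmax())                # best-matching caption index
```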

 

We’re still in the early days of AI. The field’s potential to transform almost every aspect of life is driving fervent investment in computing capacity and research and development; at the same time, it is generating excitement and concern globally.


Ready to discover more?

The Intel Tech 101 series mixes visuals and descriptions to break down complex subjects and demystify the technology we use every day.

