
Learning About ChatGPT

By: Milton V Thomas III

eBook | 16 March 2026

At a Glance

eBook | $10.99

Instant Digital Delivery to your Kobo Reader App

The book Learning About ChatGPT by Milton V. Thomas III begins by exploring the fascinating historical and philosophical roots of artificial intelligence. In the first chapter, the author traces humanity's long-standing curiosity about creating artificial beings, from ancient myths and early automata to the foundational work in logic and computation. The chapter dives into the works of thinkers like Aristotle and Descartes, as well as the mechanical advancements of the Enlightenment period, laying the groundwork for the scientific pursuit of AI.

The chapter moves on to pivotal moments in AI history, such as Alan Turing's groundbreaking 1950 paper "Computing Machinery and Intelligence," which introduced the Turing Test as a benchmark for measuring machine intelligence. This conceptual leap paved the way for the formal scientific exploration of artificial minds. The Dartmouth Summer Research Project of 1956, where the term "artificial intelligence" was coined, is also covered, highlighting the optimism of early AI pioneers and their belief that machines could replicate all aspects of human intelligence.

As the book progresses, it discusses the early AI successes and the inevitable setbacks, particularly the challenges faced during the "AI winters," when expectations outpaced progress. It examines the rise of expert systems in the 1980s and their commercial applications, particularly in fields like medicine and finance, and how these systems ultimately gave way to more data-driven approaches like machine learning and deep learning.

This sets the stage for the second chapter, which delves into the technology behind modern AI systems like ChatGPT. This chapter introduces Large Language Models (LLMs), explaining how these systems are trained on vast amounts of data to predict the next word in a sequence. The architecture of LLMs, particularly the Transformer model, is explored in depth, revealing how these models can process and generate human-like text by learning complex patterns and relationships in language. The chapter also touches on the importance of scale, both in the data used for training and in the computational power required to process it.
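
The next-word-prediction objective described above can be made concrete with a toy example. The sketch below is not from the book: it uses simple bigram counts rather than a Transformer, and its corpus and function names are invented for illustration. It shows the same core idea at miniature scale: learn from text which word tends to follow which, then predict the most likely continuation.

```python
from collections import Counter, defaultdict

# Tiny invented training corpus, split into word tokens.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the dog . "
    "the cat slept ."
).split()

# Count how often each word follows each preceding word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`, with its probability."""
    counts = bigrams[word]
    total = sum(counts.values())
    best, freq = counts.most_common(1)[0]
    return best, freq / total

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real LLM replaces the bigram table with a neural network conditioned on the entire preceding context, and the handful of tokens here with trillions, but the training signal, predicting the next token, is the same.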

This introductory segment sets up the broader exploration of AI's historical and technological development, positioning the book to guide readers through the evolving field of AI, culminating in the sophisticated systems we interact with today, like ChatGPT.
