Summary of "[혼자 공부하는 머신러닝+딥러닝] 1강. 인공지능, 머신러닝 그리고 딥러닝이란 무엇인가?" (Lecture 1: What Are Artificial Intelligence, Machine Learning, and Deep Learning?)
Overview
This document summarizes an introductory lecture (presented by Park Hye-seon) for a self-study machine learning / deep learning study group based on a Hanbit Media book. The lecture covers preparatory material from Chapters 1.1–1.2, including a brief history of AI, core definitions and relationships (AI, ML, DL), practical tooling guidance, and recommended next steps.
Purpose and context
- Introductory lecture for a self-study ML/DL group (presenter: Park Hye-seon).
- Material is based on a Hanbit Media book; the presenter adapted hand-drawn illustrations from the book for slides.
- Chapters 1.1–1.2 are preparatory/introductory; the course will focus on algorithms most relevant to deep learning with TensorFlow.
High-level timeline / short history of AI
- 1943: MCP neuron model introduced (early artificial neuron concept).
- 1950: Turing test proposed (Alan Turing) as a way to judge machine intelligence.
- 1956: Dartmouth AI Conference—initial optimism and ambitious predictions about AI.
- 1957: Perceptron (Frank Rosenblatt) — earliest practical neural-network algorithm.
- 1960s: Neuroscience findings (David Hubel & Torsten Wiesel) on edge detectors and hierarchical visual processing influenced neural-network thinking.
- 1970s: First “AI winter” — progress stalled due to limited computing power and unmet expectations.
- 1980s: Expert systems revival followed by decline (second AI winter).
- Late 20th century: Persistent research produced practical neural-net applications (e.g., ZIP-code recognition systems).
- 2012: AlexNet (Alex Krizhevsky, Ilya Sutskever, Geoffrey Hinton) dramatically improved ImageNet performance and sparked wide adoption of deep learning in vision.
- 2015: Google released TensorFlow, which popularized modern deep-learning development (preceded by libraries such as Theano).
- 2016: AlphaGo vs. Lee Sedol brought deep learning into the public spotlight and accelerated library/algorithm development.
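Rosenblatt's perceptron, mentioned in the timeline above, is simple enough to sketch in a few lines. The following is an illustrative rendering of the perceptron learning rule in plain Python (the function name and the toy AND dataset are my own, not from the lecture):

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Rosenblatt-style perceptron for linearly separable data.
    data: list of (features, label) pairs with label in {0, 1}."""
    n = len(data[0][0])
    w = [0.0] * n  # weights, one per input feature
    b = 0.0        # bias term
    for _ in range(epochs):
        for x, y in data:
            # step activation: output 1 if the weighted sum exceeds 0
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            # perceptron rule: nudge weights toward the correct label
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND: a classic linearly separable task the perceptron can learn
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

A single perceptron cannot learn non-separable functions such as XOR, a limitation that contributed to the first AI winter and that multi-layer networks later overcame.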
Definitions and relationships (core conceptual structure)
- Artificial Intelligence (AI)
- Broad field aimed at building systems that perform tasks requiring intelligence: learning, reasoning, perception.
- Common distinction:
- Strong AI (general AI / superintelligence): Human-level or beyond (fictional example: Samantha from the movie Her).
- Weak AI (narrow AI): Systems specialized for particular tasks (e.g., self-driving cars, voice assistants).
- Machine Learning (ML)
- A subset of AI focused on software and algorithms that learn from data.
- Provides many widely used algorithms and libraries.
- Deep Learning (DL)
- A subset of ML based on artificial neural networks (multi-layer networks).
- Especially prominent for vision, speech, and game-playing tasks.
AI ⊃ ML ⊃ DL
Practical / tooling guidance
- Primary programming environment: Python (widely used in ML/DL).
- Representative libraries and tools:
- scikit-learn — classical machine-learning algorithms (used in the book).
- TensorFlow — deep learning (the book uses TensorFlow with the Keras API).
- Keras — high-level API commonly used with TensorFlow.
- PyTorch — increasingly popular deep-learning library (developed at Facebook, now Meta).
- Earlier libraries: Theano (historical).
- Google Colab — recommended for interactive hands-on experiments (next lecture topic).
- Recommended practical learning path implied by the lecture/book:
- Learn Python.
- Start with scikit-learn to learn core ML algorithms.
- Move to TensorFlow (with Keras) for deep learning.
- Use Google Colab for hands-on experiments.
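The first step of that path can be sketched concretely. Below is a minimal, illustrative scikit-learn workflow (the toy dataset is invented here for demonstration and is not from the book): fit a k-nearest-neighbors classifier and score it on held-out data.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy two-feature dataset: class 0 clusters near (1, 1), class 1 near (5, 5)
X = [[1, 1], [1, 2], [2, 1], [2, 2], [5, 5], [5, 6], [6, 5], [6, 6]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Hold out 25% of the data for evaluation, keeping class balance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)         # "learning from data"
print(model.score(X_test, y_test))  # mean accuracy on the held-out set
```

The same fit/predict/score pattern carries over conceptually when moving to TensorFlow/Keras, where `compile`, `fit`, and `evaluate` play analogous roles.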
Detailed timeline (notable developments and lessons)
- 1943: Warren McCulloch & Walter Pitts — MCP neuron model (foundation for artificial neurons).
- 1950: Alan Turing — Turing Test proposed.
- 1956: Dartmouth AI Conference — high optimism and ambitious predictions.
- 1957: Frank Rosenblatt — perceptron, an early neural-network algorithm.
- 1960s: David Hubel & Torsten Wiesel — neuroscience research on visual cortex edge detectors and hierarchical processing.
- 1970s: First AI winter — hardware limits and unmet expectations slowed research.
- 1980s: Expert systems revival then decline (second AI winter).
- Late 20th century: Practical neural-net systems developed for tasks like ZIP-code recognition (early LeNet-style work).
- 2012: AlexNet — deep convolutional network breakthrough on ImageNet.
- 2015: Google releases TensorFlow — broad adoption of deep-learning tooling.
- 2016: AlphaGo vs. Lee Sedol — public spotlight on deep-learning success; rapid proliferation of libraries and algorithms.
Takeaway lessons and implications
- AI is a broad field; ML and DL are nested subsets.
- Progress in AI historically depends on both algorithmic advances and availability of compute and data; hardware limits and unmet expectations led to AI “winters.”
- Modern AI workflows are highly tool-driven within the Python ecosystem (scikit-learn, TensorFlow/Keras, PyTorch, Colab).
- Current mainstream AI is mostly narrow (weak) AI that augments or automates tasks rather than human-equivalent intelligence.
- A pragmatic learning sequence: Python → scikit-learn → TensorFlow (Keras) → Colab for experiments.
Speakers and sources referenced
- Speaker: Park Hye-seon (presenter).
- Book / publisher: Hanbit Media.
- Historical figures and contributions:
- Warren McCulloch & Walter Pitts — MCP neuron model (1943).
- Alan Turing — Turing Test.
- Dartmouth AI Conference (1956).
- Frank Rosenblatt — perceptron (1957).
- David Hubel & Torsten Wiesel — visual cortex research.
- Early practical neural-net work (e.g., ZIP-code recognition; early LeNet-style contributions often associated with Yann LeCun).
- Alex Krizhevsky, Ilya Sutskever, Geoffrey Hinton — AlexNet (2012).
- Google — TensorFlow.
- Theano — earlier deep-learning library.
- Facebook — PyTorch.
- AlphaGo and Lee Sedol — 2016 match example.
- Companies/products mentioned as examples: Tesla (self-driving), Apple Siri / smartphone voice assistants.
- Libraries/tools highlighted: scikit-learn, TensorFlow, Keras, PyTorch, Google Colab.