This repository is designed to provide fundamental knowledge and practical skills in generative AI, including Transformer models, Large Language Models, and Image Generative AI.
This content is part of the Zero to AI Master program conducted by the Daegu AI-Hub.
- Deep Dive into Transformer Models
  Detailed analysis and understanding of the Transformer architecture.
- Predicting Simple Sequences with Transformers
  Using torch.nn.Transformer to predict simple sequences (a minimal sketch follows this list).
- Fine-Tuning GPT-2 for News Headline Generation
  Hands-on project to generate news headlines by fine-tuning GPT-2.
- Fine-Tuning BERT for NSMC Classification
  Hands-on fine-tuning of BERT on the Naver Sentiment Movie Corpus (NSMC).
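
The sequence-prediction exercise boils down to a few lines of PyTorch. The sketch below is a minimal illustration rather than the course code: the toy copy task, model sizes, and hyperparameters are all assumptions. It feeds random integer sequences through torch.nn.Transformer with teacher forcing and trains the model to reproduce them.

```python
# Minimal sketch: teaching torch.nn.Transformer to copy short integer sequences.
# The toy task, model sizes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

vocab, d_model, seq_len = 10, 32, 8
emb = nn.Embedding(vocab, d_model)
head = nn.Linear(d_model, vocab)
model = nn.Transformer(d_model=d_model, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)
params = list(model.parameters()) + list(emb.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    src = torch.randint(0, vocab, (16, seq_len))       # random source batch
    tgt_in, tgt_out = src[:, :-1], src[:, 1:]          # teacher forcing: shift by one
    causal_mask = model.generate_square_subsequent_mask(tgt_in.size(1))
    out = model(emb(src), emb(tgt_in), tgt_mask=causal_mask)  # (16, seq_len-1, d_model)
    loss = nn.functional.cross_entropy(head(out).reshape(-1, vocab), tgt_out.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```
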
- Key Technologies Leading to LLMs
  A review of the essential advancements that enabled the development of LLMs.
- Utilizing the OpenAI API and Prompt Engineering
  Practical usage of the ChatGPT API and prompt engineering techniques (a minimal call sketch follows this list).
- LangChain Basics and RAG App Development
  Introduction to LangChain and a project for building a Retrieval-Augmented Generation (RAG) application, comparing answers with and without retrieval:

| without RAG | with RAG |
|---|---|
| Incorrect answer about King Munmu the Great | Correct answer about King Munmu the Great, based on the PDF document |
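
As a taste of the API topic above, here is a minimal sketch of a ChatGPT call with a simple system prompt. It assumes the openai Python SDK (v1+) with OPENAI_API_KEY set in the environment; the model name and prompts are placeholder choices, not the course material's.

```python
# Minimal sketch of a ChatGPT API call with a simple system prompt.
# Assumes the openai Python SDK v1+ and OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You are a concise tutor. Answer in one sentence."},
        {"role": "user", "content": "What problem does a Transformer's attention mechanism solve?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```
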
- Introduction to AutoEncoders and Variational AutoEncoders
  Theory and hands-on sessions for understanding AutoEncoders and VAEs (a minimal VAE sketch follows this item).
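
A compressed view of the VAE hands-on: the sketch below shows only the reparameterization trick and the two-term loss. The layer sizes and the flattened 28x28 input are assumptions for illustration.

```python
# Minimal VAE sketch: encoder -> (mu, logvar) -> reparameterize -> decoder.
# Layer sizes and the flattened 28x28 input are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")      # reconstruction term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence term
    return recon + kld
```
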
- Denoising Diffusion Models
  - Overview of Denoising Diffusion Probabilistic Models (DDPM).
  - Proof-of-Concept (PoC) implementation of an unconditional DDPM.
  - PoC implementation of a conditional DDPM (a forward-noising sketch follows this list).
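
To make the DDPM items concrete, this is a minimal sketch of the closed-form forward noising step q(x_t | x_0) and the standard noise-prediction loss. The linear beta schedule, tensor shapes, and the placeholder `model` are assumptions, not the course implementation.

```python
# Minimal DDPM sketch: closed-form forward noising and the noise-prediction loss.
# The linear beta schedule, shapes, and the placeholder `model` are assumptions.
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product of (1 - beta_t)

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    a_bar = alphas_bar.to(x0.device)[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

def ddpm_loss(model, x0):
    """Train the network to predict the added noise (the epsilon objective)."""
    t = torch.randint(0, T, (x0.size(0),), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    return F.mse_loss(model(x_t, t), noise)
```
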
- Latent Diffusion Models (LDMs)
  - Introduction to LDMs and their applications.
  - PoC implementation of unconditional and conditional LDMs (a latent-encoding sketch follows this item).
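
The defining step of an LDM is running diffusion in a VAE latent space instead of pixel space. Below is a minimal sketch of that encode/decode round trip, assuming the ๐คDiffusers AutoencoderKL from the SD 1.5 checkpoint and its usual 0.18215 scaling factor; the diffusion itself would reuse a loss like the DDPM sketch above, applied to these latents.

```python
# Minimal LDM sketch: encode images to VAE latents, diffuse there, decode back.
# Assumes ๐คDiffusers' AutoencoderKL from the SD 1.5 checkpoint; the 0.18215
# scaling factor and the 512x512 input size are the usual SD 1.5 conventions.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
vae.requires_grad_(False)  # the autoencoder stays frozen while the diffusion model trains

@torch.no_grad()
def to_latents(images):                                         # (B, 3, 512, 512) in [-1, 1]
    return vae.encode(images).latent_dist.sample() * 0.18215    # -> (B, 4, 64, 64)

@torch.no_grad()
def from_latents(latents):
    return vae.decode(latents / 0.18215).sample                 # back to pixel space

# Diffusion (e.g. the DDPM loss sketched earlier) is then applied to these
# 4x64x64 latents rather than to full-resolution pixels.
```
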
- HuggingFace ๐คDiffusers Framework
  - Introducing the ๐คDiffusers library for image generation tasks.
  - Training an image generation model with ๐คDiffusers.
  - Exploring different ๐คDiffusers pipelines, including implementing the image2image and inpainting pipelines from scratch (a minimal img2img sketch follows the tables below).
| Input Image and Generated Image |
|---|
| ![]() |

| Input Image and Generated Image |
|---|
| ![]() |
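
As a small companion to the pipeline exploration above, this is a minimal sketch of running the stock img2img pipeline. It assumes the SD 1.5 checkpoint and a local input image; the file names, prompt, and strength value are illustrative.

```python
# Minimal img2img sketch with the stock ๐คDiffusers pipeline.
# The checkpoint, file names, prompt, and strength are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="a watercolor painting of the same scene",
    image=init_image,
    strength=0.75,        # how far the init image may be altered
    guidance_scale=7.5,
).images[0]
result.save("generated.png")
```
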
  - Inpainting app project using ๐คDiffusers and Gradio (a minimal app sketch follows the table below).
| without Prompt | with Prompt |
|---|---|
| ![]() | ![]() |
| None | A small robot, high resolution, sitting on a park bench |
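
For the Gradio app item above, here is a minimal sketch wiring the stock inpainting pipeline to a simple interface. The checkpoint name, image handling, and UI layout are assumptions rather than the course app.

```python
# Minimal Gradio inpainting sketch: image + mask + prompt -> inpainted image.
# The checkpoint, UI layout, and defaults are illustrative assumptions.
import torch
import gradio as gr
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

def inpaint(image, mask, prompt):
    # White pixels in the mask are regenerated; an empty prompt keeps it unconditional.
    image, mask = image.resize((512, 512)), mask.resize((512, 512))
    return pipe(prompt=prompt or "", image=image, mask_image=mask).images[0]

demo = gr.Interface(
    fn=inpaint,
    inputs=[gr.Image(type="pil", label="Input image"),
            gr.Image(type="pil", label="Mask (white = repaint)"),
            gr.Textbox(label="Prompt", placeholder="A small robot sitting on a park bench")],
    outputs=gr.Image(label="Result"),
)

if __name__ == "__main__":
    demo.launch()
```
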
- Stable Diffusion Fine-Tuning
  - SD 1.5 model full fine-tuning.
  - LoRA adapter training for SD 1.5 using PEFT (Parameter-Efficient Fine-Tuning).
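
To illustrate the LoRA item, the sketch below attaches LoRA adapters to the SD 1.5 UNet attention projections via PEFT. The rank, alpha, and target module names follow the common diffusers LoRA recipe and are assumptions here, not the course script.

```python
# Minimal LoRA sketch: wrap the SD 1.5 UNet attention projections with PEFT adapters.
# Rank, alpha, and target module names follow the common diffusers recipe and are
# illustrative assumptions; training then uses the usual noise-prediction loss.
from diffusers import UNet2DConditionModel
from peft import LoraConfig, get_peft_model

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

lora_config = LoraConfig(
    r=8,             # low-rank dimension of the adapter matrices
    lora_alpha=16,   # scaling factor applied to the adapter output
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections
)
unet = get_peft_model(unet, lora_config)
unet.print_trainable_parameters()  # only the LoRA weights remain trainable
```
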
This project is licensed under the Non-Commercial Use Only License.
- Non-Commercial Use Only: This software is provided for personal, educational, and non-commercial purposes only.
- Commercial Use Prohibited: Commercial use of this software is strictly prohibited without prior written consent from the copyright holder.
- For inquiries about commercial licensing, please contact metamath@gmail.com.
For questions or further information, please reach out to:
๐ฉ Email: metamath@gmail.com
๐ Website: https://metamath1.github.io/blog




