Generative AI Fundamental Course Repository

This repository is designed to provide fundamental knowledge and practical skills in generative AI, including Transformer models, Large Language Models (LLMs), and image generative AI.

This content is part of the Zero to AI Master program conducted by Daegu AI-Hub.


📚 Course Contents

1. Transformer

  • Deep Dive into Transformer Models
    Detailed analysis and understanding of the Transformer architecture.
  • Predicting Simple Sequences with Transformers
    Using torch.nn.Transformer to predict simple sequences (a minimal sketch follows this list).
  • Fine-Tuning GPT-2 for News Headline Generation
    Hands-on project to generate news headlines by fine-tuning GPT-2.
  • Fine-Tuning BERT for NSMC Classification
    Hands-on project to fine-tune BERT on the Naver Sentiment Movie Corpus (NSMC).
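
The sequence-prediction exercise above can be summarized by the following minimal sketch. It is not the course notebook itself: the copy/shift toy task, model size, and training schedule are assumptions chosen only to show how torch.nn.Transformer is wired up.

```python
import torch
import torch.nn as nn

VOCAB, D_MODEL, SEQ_LEN = 10, 32, 8

class ToySeqModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D_MODEL)
        self.pos = nn.Embedding(SEQ_LEN, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=64, batch_first=True,
        )
        self.head = nn.Linear(D_MODEL, VOCAB)

    def embed(self, x):
        # Token embedding + learned positional embedding.
        return self.tok(x) + self.pos(torch.arange(x.size(1), device=x.device))

    def forward(self, src, tgt):
        # Causal mask keeps the decoder from attending to future target tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(self.embed(src), self.embed(tgt), tgt_mask=tgt_mask)
        return self.head(out)

model = ToySeqModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    src = torch.randint(0, VOCAB, (16, SEQ_LEN))   # random source sequences
    tgt = (src + 1) % VOCAB                        # toy target: every token shifted by +1
    logits = model(src, tgt[:, :-1])               # teacher forcing
    loss = loss_fn(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```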

2. Large Language Models (LLMs)

  • Key Technologies Leading to LLMs
    A review of the essential advancements that enabled the development of LLMs.

  • Utilizing the OpenAI API and Prompt Engineering
    Practical usage of the ChatGPT API and prompt engineering techniques.

  • LangChain Basics and RAG App Development
    Introduction to LangChain and a project for building a Retrieval-Augmented Generation (RAG) application (a minimal sketch follows the comparison below).

    Without RAG vs. with RAG:
    • without RAG: incorrect answer about ROKS Munmu the Great (문무대왕함)
    • with RAG: correct answer about ROKS Munmu the Great, grounded in the provided PDF document
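
As a rough illustration of the RAG exercise, the sketch below indexes a PDF with LangChain and answers a question from the retrieved chunks. The file name, model name, and chunking parameters are assumptions, and the imports follow the current split packages (langchain-community, langchain-text-splitters, langchain-openai), which may differ from the course code.

```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

# 1. Load the source PDF and split it into chunks (file name is hypothetical).
docs = PyPDFLoader("munmu_the_great.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and build a vector index.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Retrieve the most relevant chunks and stuff them into the prompt.
question = "Tell me about ROKS Munmu the Great."
context = "\n\n".join(d.page_content for d in index.similarity_search(question, k=3))

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model name
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```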

3. Image Generative AI

  • Introduction to AutoEncoders and Variational AutoEncoders
    Theory and hands-on sessions for understanding AutoEncoders and VAEs.
  • Denoising Diffusion Models
    • Overview of Denoising Diffusion Probabilistic Models (DDPM).
    • Proof-of-Concept (PoC) implementation of unconditional DDPM.
    • PoC implementation of conditional DDPM.
  • Latent Diffusion Models (LDMs)
    • Introduction to LDMs and their applications.
    • PoC implementation of unconditional and conditional LDM.
  • HuggingFace 🤗Diffusers Framework
    • Introducing the 🤗Diffusers library for image generation tasks.
    • Training an image generation model with 🤗Diffusers.
    • Exploring different 🤗Diffusers pipelines, including from-scratch implementations of the image2image and inpainting pipelines (a minimal pipeline-usage sketch follows the examples below).

    Image-to-Image example: input image and generated output.

    Inpainting example: input image and generated output.
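
For reference, image-to-image generation with 🤗Diffusers can look like the minimal sketch below. The base checkpoint, file names, prompt, and strength value are assumptions; the course additionally re-implements this pipeline from scratch.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Assumed base model; .to("cuda") assumes a GPU is available.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))  # hypothetical input
result = pipe(
    prompt="a watercolor painting of the same scene",  # example prompt
    image=init_image,
    strength=0.6,        # how much noise to add to the input image
    guidance_scale=7.5,
).images[0]
result.save("img2img_result.png")
```
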
  • Inpainting app project using 🤗Diffusers and Gradio (a minimal sketch follows the comparison below).

    Inpainting Gradio App example:
    • without prompt: None
    • with prompt: "A small robot, high resolution, sitting on a park bench"
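
A stripped-down version of such an app might look like the sketch below; the inpainting checkpoint and the two-image (image + mask) interface are assumptions, and the actual course app may expose more controls.

```python
import torch
import gradio as gr
from diffusers import StableDiffusionInpaintPipeline

# Assumed inpainting checkpoint; .to("cuda") assumes a GPU is available.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

def inpaint(image, mask, prompt):
    # White pixels in the mask are regenerated according to the prompt.
    image = image.convert("RGB").resize((512, 512))
    mask = mask.convert("RGB").resize((512, 512))
    return pipe(prompt=prompt or "", image=image, mask_image=mask).images[0]

demo = gr.Interface(
    fn=inpaint,
    inputs=[gr.Image(type="pil"), gr.Image(type="pil"), gr.Textbox(label="prompt")],
    outputs=gr.Image(type="pil"),
)
demo.launch()
```
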
  • Stable Diffusion Fine-Tuning
    • SD 1.5 model full fine-tuning.
    • LoRA adapter training using PEFT (Parameter-Efficient Fine-Tuning); a minimal LoRA setup sketch follows below.
      SD 1.5 fine-tuning results (sample images).
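
The LoRA exercise can be summarized roughly as follows; the rank, alpha, and target-module names are assumptions, and the full training loop (noise prediction on image-caption pairs) is omitted.

```python
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32
)

# Attach low-rank adapters to the UNet's attention projections (assumed settings).
lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet = get_peft_model(pipe.unet, lora_config)
unet.print_trainable_parameters()  # only the small LoRA matrices are trainable

# A training loop would now optimize unet.parameters() on image-caption pairs,
# predicting the added noise as in standard diffusion training.
```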


📜 License

This project is licensed under the Non-Commercial Use Only License.

โš ๏ธ Restrictions

  • Non-Commercial Use Only: This software is provided for personal, educational, and non-commercial purposes only.
  • Commercial Use Prohibited: Commercial use of this software is strictly prohibited without prior written consent from the copyright holder.
  • For inquiries about commercial licensing, please contact metamath@gmail.com.

📧 Contact

For questions or further information, please reach out to:
📩 Email: metamath@gmail.com 🌐 Website: https://metamath1.github.io/blog
