basic

Key takeaways

  1. llm์ด๋‚˜ ๋จธ์‹ ๋Ÿฌ๋‹๋“ฑ ์–ด๋–ป๊ฒŒ ์ด๊ฑธ ๋งŒ๋“œ๋Š”๊ฐ€๋Š” ์˜ค๋Š˜ ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ํ™œ์šฉ์„ ํ•˜๋Š”๊ฑฐ๋งŒ ๊ณ ๋ฏผํ•ฉ๋‹ˆ๋‹ค. ์™œ๋ƒ๋ฉด ์ œ๊ฐ€ ์ดˆ๋ณด๋ผ..

  2. llm์€ ๊ทธ๋ƒฅ ์ž…์ฝ”๋”์ด๋‹ค. ๋‚˜๋จธ์ง€๋Š” ์ „๋ถ€ ๊ฐœ๋ฐœ์ž๊ฐ€. ๋ง๋งŒ ํ•ด์ฃผ์ง€ ์ง์ ‘ ์•„๋ฌด๊ฒƒ๋„ ํ•ด์ฃผ์ง€ ์•Š๋Š”๋‹ค.

  3. ๊ณต์‹ ๋ฌธ์„œ๋ฅผ ํ•ญ์ƒ ์ž˜ ๋ณด๋Š” ๋ฒ„๋ฆ‡์„ ๋“ค์ž…์‹œ๋‹ค.

  4. ๋ณ€ํ™”๊ฐ€ ์•„์ฃผ ๋นจ๋ผ์„œ ๊ธฐ์กด๋ฐฉ์‹์˜ ์ƒ˜ํ”Œ์ฝ”๋“œ๋“ค์ด ๋„ˆ๋ฌด ๋งŽ์ด ์ธํ„ฐ๋„ท์— ๋ณด์ž…๋‹ˆ๋‹ค. ํ™•์ธํ•ด์„œ ์ตœ์‹ ๋ฒ„์ „์œผ๋กœ ๊ณต๋ถ€ํ•ฉ์‹œ๋‹ค.

Setting up a LangChain development environment

  • docker

  • vscode

  • devcontainer

  • python

  • jupyter - on container

devcontainer

mkdir .devcontainer
vi .devcontainer/devcontainer.json
{
  "name": "dev",
  "image": "mcr.microsoft.com/devcontainers/python:1-3.12-bullseye"
  // "postCreateCommand": "pip3 install --user -r requirements.txt",
  // "customizations": {
  //   "vscode": {
  //     "extensions": ["ms-toolsai.jupyter"]
  //   }
  // },
}

Basic workflow

setup .env file

Get an API key and save it in a .env file.

https://platform.openai.com/settings/profile?tab=api-keys

OPENAI_API_KEY=sk-xxx

openai์—์„œ Project key๋ฅผ ์‚ฌ์šฉํ•˜๋Š”๊ฑธ ์ถ”์ฒœ

https://platform.openai.com/api-keys

read .env file

%pip install  --user -Uq  python-dotenv
from dotenv import load_dotenv
load_dotenv()
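Under the hood, load_dotenv just reads KEY=VALUE lines from the file and puts them into the process environment. A rough stdlib-only sketch of that mechanism (for illustration only; in practice use python-dotenv, which also handles quoting and interpolation):

```python
import os

def load_env_text(text: str) -> None:
    """Roughly what load_dotenv does: parse KEY=VALUE lines into os.environ."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())

load_env_text("# my keys\nOPENAI_API_KEY=sk-xxx\n")
print("OPENAI_API_KEY" in os.environ)  # True
```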

test api key

%pip install  --user -Uq  langchain langchain-community langchain-core langchain-openai

import openai

openai.__version__

# if a version string prints, the package is installed correctly

Now, shall we connect?

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
  # model="gpt-3.5-turbo",  # model name
  temperature = 0.1, # creativity (0.0 ~ 2.0)
)

# the question
prompt = "What is the capital of South Korea?"

# run the query
print(f"[Answer]: {llm.invoke(prompt)}")

prompt

๋ง์„ ์ „๋‹ฌํ•˜๋Š”๊ฒƒ์„ ํ”„๋กฌํ”„ํŠธ๋ผ๊ณ  ํ•œ๋‹ค. ์ด ์šฉ์–ด๋ฅผ ์ฒ˜์Œ์— ๋“ค์—‡์„๋•Œ ์ฐธ ์ดํ•ด๊ฐ€ ์•ˆ๋ฌ๋‹ค. ๊ทธ๋ƒฅ ๋‚ด๊ฐ€ ํ•˜๋Š” ๋ง์ด๋‹ค.

openai model and price

https://platform.openai.com/docs/models

https://openai.com/api/pricing/

Context windows have grown so large that these days it feels like you barely need to split files anymore. The cost would pile up, though.

  • check the context windows

  • check the prices

I'll use gpt-3.5-turbo. -> $0.50 / 1M tokens

gpt-4o -> $5 / 1M tokens

https://platform.openai.com/docs/deprecations (deprecations)

gpt-3.5-turbo used to max out at 4,000 tokens; the model has since been updated, and the name now automatically points to gpt-3.5-turbo-0125.

It accepts 16,385 tokens. That is four times as many, but it still feels a little small.
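To put those prices in perspective, the cost of a request's input is just the token count divided by one million, times the per-million price. A quick back-of-the-envelope check with the numbers quoted above:

```python
def cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Input-token cost at a given per-1M-token price."""
    return tokens / 1_000_000 * price_per_million_usd

# A full 16,385-token gpt-3.5-turbo context at $0.50 / 1M input tokens:
print(f"{cost_usd(16_385, 0.5):.4f}")  # about $0.0082
# The same context through gpt-4o at $5 / 1M costs ten times as much:
print(f"{cost_usd(16_385, 5.0):.4f}")
```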

LLM model vs Chat Model

LLM mode and chat mode are the two modes you can choose between when using OpenAI's GPT language models.

LLM Mode (Language Model Mode): generates the most likely continuation of the given input. It is mainly used for tasks such as sentence completion and document generation. For example, given the prompt "The weather today is", it might generate "sunny".

Chat Mode: used for conversational tasks. The goal is to generate a response to the user's input, so it is mainly used for chatbots and dialogue systems. For example, if the user asks "How's the weather today?", it can respond "It's sunny today".

๋”ฐ๋ผ์„œ, llm mode์™€ chat mode์˜ ์ฃผ์š” ์ฐจ์ด์ ์€ ๊ทธ๋“ค์ด ํ•ด๊ฒฐํ•˜๋ ค๋Š” ์ž‘์—…์˜ ์œ ํ˜•์— ์žˆ์Šต๋‹ˆ๋‹ค.

  • llm mode๋Š” ์ผ๋ฐ˜์ ์ธ ์–ธ์–ด ๋ชจ๋ธ๋ง ์ž‘์—…์—

  • chat mode๋Š” ๋Œ€ํ™”ํ˜• ์ž‘์—…์— ์‚ฌ์šฉ๋ฉ๋‹ˆ๋‹ค.

https://python.langchain.com/v0.2/docs/integrations/chat/openai/

vs

https://python.langchain.com/v0.2/docs/integrations/llms/openai/

The OpenAI class is only used with gpt-3.5-turbo-instruct. Unless you need that model, use ChatOpenAI.

# sample of OpenAI

from langchain_openai import OpenAI
llm = OpenAI(
    model="gpt-3.5-turbo-instruct",
)

# ์งˆ์˜๋‚ด์šฉ
question = "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?"

# ์งˆ์˜
print(f"[๋‹ต๋ณ€]: {llm.invoke(question)}")

ChatOpenAI should be the way to go.

ChatOpenAI accepts the following types of messages.

Message Type

  • system message : instructions that set up the model's behavior

  • human message : what the user typed

  • ai message : what the model produced (called assistant in OpenAI terminology)

llm = ChatOpenAI(
    temperature=0.1
)

messages = [
    ("system", "You are a helpful assistant that translates English to Korean. Translate the user sentence."),
    ("human", "I love programming."),
]

ai_msg = llm.invoke(messages)
ai_msg
print(ai_msg.content)

PIPELINE

ํŒŒ์ดํ”„๋ผ์ธ์„ ๋งŒ๋“ค์–ด์„œ ๋‘๊ณ  ์—ฐ๊ฒฐํ•ด๋‘”๋‹ค์Œ. ํ•„์š”ํ• ๋•Œ๋งˆ๋‹ค ๋ณ€์ˆ˜์— ๊ฐ’์„ ๋„ฃ์œผ๋ฉด ์ž๋™์œผ๋กœ ํŒŒ์ดํ”„๋ผ์ธ์„ ํ†ตํ•ด์„œ ๊ฒฐ๊ณผ๊ฐ€ ๋‚˜์˜จ๋‹ค.

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that translates {input_language} to {output_language}. Translate the user sentence."),
        ("human", "{input}"),
    ]
)

chain = prompt | llm # think of this as building a pipe that connects the prompt to the llm.

answer = chain.invoke(
    {
        "input_language": "English",
        "output_language": "Korean",
        "input": "I love you.",
    }
)
answer
answer.content

chain

๋ง์„ ํ•˜๋Š”๋ฐ ๋ณ€์ˆ˜๋กœ ์ฒ˜๋ฆฌํ•˜๊ณ  ์‹ถ์–ด์„œ template ๊ฐ€ ์ƒ๊ฒผ๋‹ค. ํ”„๋กฌํ”„ํŠธ ํ…œํ”Œ๋ฆฟ์„ ๋งŒ๋“ค๊ณ  ๋ชจ๋ธ๊ณผ ์ฒด์ธ์œผ๋กœ ์—ฐ๊ฒฐํ•œ๋‹ค. Invoke๋ฅผ ํ• ๋•Œ ํ”„๋กฌํ”„ํŠธ์— ๋“ค์–ด๊ฐ€๋Š” ๊ฒƒ๋“ค์€ ๋‹ค ๋„ฃ์–ด์ค€๋‹ค.

LCEL (LangChain Expression Language)

https://python.langchain.com/v0.2/docs/concepts/#langchain-expression-language-lcel

The LangChain Expression Language (LCEL) is a declarative way to compose LangChain components.

  • stream: stream back chunks of the response

  • invoke: call the chain on an input

  • batch: call the chain on a list of inputs

These methods also have corresponding async methods, meant to be used with asyncio's await syntax for concurrency:

  • astream: stream back chunks of the response asynchronously

  • ainvoke: call the chain on an input asynchronously

  • abatch: call the chain on a list of inputs asynchronously

  • astream_log: stream back intermediate steps as they happen, in addition to the final response

  • astream_events: beta, stream events as they happen in the chain (introduced in langchain-core 0.1.14)

In short:

  1. You only need to know the Runnable interface; I'll explain it later if the chance comes up.

  2. In particular, just know invoke, batch, and stream. They are how you feed values into the pipe.
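What the three methods mean can be shown without LangChain at all. Below is a stdlib-only stand-in for a Runnable that supports the same triple (the class and its uppercase behavior are invented for illustration):

```python
class UpperRunnable:
    """Toy stand-in for a Runnable: uppercases its input."""

    def invoke(self, text: str) -> str:
        # one input in, one output out
        return text.upper()

    def batch(self, texts: list) -> list:
        # a list of inputs, one invoke each
        return [self.invoke(t) for t in texts]

    def stream(self, text: str):
        # yield the result chunk by chunk, like a token stream
        for ch in self.invoke(text):
            yield ch

r = UpperRunnable()
print(r.invoke("seoul"))        # SEOUL
print(r.batch(["a", "b"]))      # ['A', 'B']
print("".join(r.stream("hi")))  # HI
```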

Invoke

Invokeํ•จ์ˆ˜๋ฅผ ์ด์šฉํ•ด์„œ openai์— ๋ฉ”์„ธ์ง€๋ฅผ ๋ณด๋‚ผ์ˆ˜ ์žˆ๋‹ค.

PromptTemplate ๋Š” chatPromptTemplate๋ณด๋‹ค ๊ฐ„๋‹จํ•ด์„œ ์ผ๋‹ค.

๊ฒฐ๊ณผ๋ฅผ outputParser๋กœ ๋ณด๋‚ผ์ˆ˜์žˆ๋‹ค.

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# define the question template
template = "What is the capital of {country}?"

# build the template
prompt = PromptTemplate.from_template(template=template)
prompt

chain = prompt | llm | StrOutputParser()
chain.invoke({"country": "South Korea"})
chain.invoke({"country": "Canada"})

batch

์—ฌ๋Ÿฌ๊ฐœ์˜ ์ž…๋ ฅ์„ ํ•œ๋ฒˆ์— ์ฒ˜๋ฆฌ ๊ฐ€๋Šฅ

input_list = [{"country": "ํ˜ธ์ฃผ"}, {"country": "์ค‘๊ตญ"}, {"country": "๋„ค๋œ๋ž€๋“œ"}]
result = chain.batch(input_list)
result
# ๋ฐ˜๋ณต๋ฌธ์œผ๋กœ ๊ฒฐ๊ณผ ์ถœ๋ ฅ
for res in result:
    print(res.strip())

stream: real-time output

The streaming option is useful when you want to receive the answer to a query in real time.

Set streaming=True as shown below, and register StreamingStdOutCallbackHandler() as a callback to receive the answer as a stream.

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# ๊ฐ์ฒด ์ƒ์„ฑ
llm = ChatOpenAI(
    temperature=0,  # ์ฐฝ์˜์„ฑ (0.0 ~ 2.0)
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)

# ์งˆ์˜๋‚ด์šฉ
prompt = "๋Œ€ํ•œ๋ฏผ๊ตญ์— ๋Œ€ํ•ด์„œ 300์ž ๋‚ด์™ธ๋กœ ์ตœ๋Œ€ํ•œ ์ƒ์„ธํžˆ ์•Œ๋ ค์ค˜"

# ์ŠคํŠธ๋ฆฌ๋ฐ์œผ๋กœ ๋‹ต๋ณ€ ์ถœ๋ ฅ
response = llm.invoke(prompt)
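The callback pattern itself is simple: instead of returning the whole answer at once, the model side calls a handler method once per token. A stdlib-only sketch of what StreamingStdOutCallbackHandler does (the fake model and its three chunks are made up for illustration; on_llm_new_token is the handler method name LangChain uses):

```python
class PrintingHandler:
    """Stand-in for StreamingStdOutCallbackHandler: print each token on arrival."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str) -> None:
        self.tokens.append(token)
        print(token, end="", flush=True)  # show the token immediately

def fake_streaming_llm(prompt: str, handler: PrintingHandler) -> str:
    # Pretend the model answers in three chunks instead of one block.
    for chunk in ["Seoul ", "is the ", "capital."]:
        handler.on_llm_new_token(chunk)
    return "".join(handler.tokens)

handler = PrintingHandler()
answer = fake_streaming_llm("What is the capital of South Korea?", handler)
```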

Connecting multiple chains

The output of the first chain can be passed on to the next chain.

template = "What is the capital of {country}?"
prompt = PromptTemplate.from_template(template=template)

chain = prompt | llm

template2 = "What is the population of {city}?"
prompt2 = PromptTemplate.from_template(template=template2)

chain2 = prompt2 | llm

final_chain = {"city": chain} | chain2

final_chain.invoke({"country": "South Korea"})

Runnable interface

https://python.langchain.com/v0.2/docs/concepts/#runnable-interface

A prompt takes a dictionary as input and returns a PromptValue. The model takes the PromptValue as input and returns a chat message. The output parser takes the chat message as input and returns a value according to its parser type.
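That data flow can be sketched in plain Python by overloading `|` the way LCEL does. All three stages below are fakes; only the shape of the flow (dict -> prompt string -> message -> plain string) follows the description above:

```python
class Step:
    """Minimal pipeable stage: `a | b` composes two steps into one."""

    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# dict in -> prompt string out (stands in for PromptTemplate)
prompt = Step(lambda d: f"What is the capital of {d['country']}?")
# prompt string in -> message dict out (stands in for the chat model)
model = Step(lambda p: {"role": "ai", "content": "Seoul"})
# message in -> plain string out (stands in for StrOutputParser)
parser = Step(lambda m: m["content"])

chain = prompt | model | parser
print(chain.invoke({"country": "South Korea"}))  # Seoul
```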

Using prompt templates

PromptTemplate : a template used to build a prompt string

  • template: the template string. Curly braces {} inside it mark variables.

  • input_variables: a list of the variable names that go inside the braces.

# define the question template
template = "What is the capital of {country}?"

# build the template
prompt = PromptTemplate.from_template(template=template)
prompt
chain = prompt | llm
print(chain.invoke({"country": "Korea"}))

Defining two or more variables in a template

# define the question template
template = "What is the time difference between {area1} and {area2}?"

# build the template
prompt = PromptTemplate.from_template(template)
prompt
chain = prompt | llm

print(chain.invoke({"area1": "Seoul", "area2": "Paris"}))

input_list = [
    {"area1": "Paris", "area2": "New York"},
    {"area1": "Seoul", "area2": "Hawaii"},
    {"area1": "Canberra", "area2": "Beijing"},
]

# print the results in a loop
result = chain.batch(input_list)
for res in result:
    print(res.content.strip())

PromptTemplate vs ChatPromptTemplate

  • PromptTemplate is used for general tasks

  • ChatPromptTemplate is used for conversational tasks.

from langchain_core.prompts import ChatPromptTemplate

template = "Hello, how are you doing {name}"

prompt = ChatPromptTemplate.from_template(template)

chain = prompt | llm
print(chain.invoke({"name": "John"}))
from langchain_core.prompts import ChatPromptTemplate

message = [
    ("system", "You are a helpful AI bot. Your name is {name}."),
    ("human", "Hello, how are you doing?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "{user_input}"),
]

prompt = ChatPromptTemplate.from_messages(message)

chain = prompt | llm

print(chain.invoke({"name": "John", "user_input": "What is the weather like today?"}))

ChatPromptTemplate์€ ์ƒ˜ํ”Œ์ฒ˜๋Ÿผ system,human,ai ๋ฉ”์„ธ์ง€๋ฅผ ํฌํ•จํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. PromptTemplate์€ ๋‹จ์ˆœํ•œ ํ…์ŠคํŠธ๋ฅผ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.

output parser

์•„์›ƒํ’‹์„ ํŒŒ์‹ฑํ•ด์ค€๋‹ค.

https://python.langchain.com/v0.2/docs/concepts/#output-parsers

json parser : https://python.langchain.com/v0.2/docs/how_to/output_parser_json/

from langchain_core.output_parsers import JsonOutputParser
# from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question that sets up the joke")
    punchline: str = Field(description="answer to the joke")

model = ChatOpenAI(temperature=0)

joke_query = "Tell me a joke, and give the answer in Korean"

parser = JsonOutputParser(pydantic_object=Joke)
# parser = StrOutputParser()

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={
        "format_instructions": parser.get_format_instructions()},
)

chain = prompt | model | parser # build the chain by connecting the pieces

chain.invoke({"query": joke_query})

๊ทธ๋Ÿฐ๋ฐ ์ฃผ์˜์‚ฌํ•ญ์ด ์žˆ๋‹ค. https://python.langchain.com/v0.2/docs/concepts/#output-parsers

It is recommended to use function/tool calling rather than output parsing.

So let's use function calling.
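In concrete terms, "use function calling" means sending the model a JSON Schema and letting it reply with matching arguments, instead of parsing free text. The dict below is the tool-definition shape the OpenAI chat API expects for the Joke fields above (hand-written here as an illustration; in LangChain, helpers such as with_structured_output build this for you):

```python
# The tool definition for Joke, in the OpenAI chat completions tools format.
joke_tool = {
    "type": "function",
    "function": {
        "name": "Joke",
        "description": "A joke with a setup question and a punchline",
        "parameters": {  # plain JSON Schema
            "type": "object",
            "properties": {
                "setup": {"type": "string", "description": "question that sets up the joke"},
                "punchline": {"type": "string", "description": "answer to the joke"},
            },
            "required": ["setup", "punchline"],
        },
    },
}

# This dict goes into the `tools=[...]` argument of a chat completion request;
# the reply then comes back as structured arguments, so no string parsing is needed.
print(sorted(joke_tool["function"]["parameters"]["required"]))  # ['punchline', 'setup']
```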

Prompt engineering

  • Zero Shot : the model is given no examples

Add 2+2:
  • One Shot : the model is given a single example

Add 3+3: 6
Add 2+2:
  • Few Shot : the model is given a few examples

Add 3+3: 6
Add 5+5: 10
Add 2+2:
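The three styles differ only in how many worked examples precede the real question. A small helper that assembles such prompts (the function name is invented for illustration):

```python
def shot_prompt(examples: list, query: str) -> str:
    """Build a zero/one/few-shot prompt from (problem, answer) example pairs."""
    lines = [f"Add {q}: {a}" for q, a in examples]
    lines.append(f"Add {query}:")  # the real question comes last, unanswered
    return "\n".join(lines)

print(shot_prompt([], "2+2"))                             # zero shot
print(shot_prompt([("3+3", "6")], "2+2"))                 # one shot
print(shot_prompt([("3+3", "6"), ("5+5", "10")], "2+2"))  # few shot
```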

Summary

Where we are now:

We learned how to send a message from LangChain to OpenAI.

In the next lesson, let's send the message to Slack.

Other musings

Why should we use langchain at all?

Why not call the openai api directly?

I want to be able to swap models at any time, and swapping models in langchain is very easy.

So I am using langchain for now, but do whatever suits you. I still haven't settled on an answer myself; I go back and forth, and my mind changes ten times a day.

Let's try Google's AI.

  1. Go to google ai studio and get an api key.

  2. Add it to the .env file: GOOGLE_API_KEY=xxx

%pip install  --user -Uq   langchain-google-genai pillow
# api key load
from dotenv import load_dotenv
load_dotenv()
# from langchain_openai import ChatOpenAI
from langchain_google_genai import ChatGoogleGenerativeAI

# llm = ChatOpenAI(
llm = ChatGoogleGenerativeAI(
    model="gemini-pro",  # model name
    temperature=0,  # creativity (0.0 ~ 2.0)
)

# the question
question = "What is the capital of South Korea?"

# run the query
print(f"[Answer]: {llm.invoke(question)}")

You can see that it just works.

This is why I use langchain: it is this easy. Though if you are only ever going to use openai, you do wonder whether the extra layer is really worth it.


Last updated 10 months ago
