- chain_type is important
- call chain: run() → __call__() → _get_docs()
  - _get_docs() gets the relevant documents under the max token length: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/retrieval.py
  - limitation: sources are retrieved from the question alone
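The token-budget trimming in `_get_docs()` can be sketched roughly like this (the whitespace "tokenizer" and the function name are simplifications, not LangChain's actual implementation):

```python
# Sketch: drop trailing retrieved documents until the total token count
# fits the limit, keeping the highest-ranked (earliest) documents.
# Assumption: a naive whitespace split stands in for the real tokenizer.

def reduce_below_limit(docs: list, max_tokens_limit: int) -> list:
    """Return a prefix of docs whose combined token count <= limit."""
    token_counts = [len(d.split()) for d in docs]
    num_docs = len(docs)
    total = sum(token_counts)
    while total > max_tokens_limit and num_docs > 0:
        num_docs -= 1
        total -= token_counts[num_docs]
    return docs[:num_docs]
```

Because trimming happens from the tail, retrieval order (relevance to the question) decides which sources survive the budget.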
- much slower than native HF inference because the combine strategies make multiple LLM calls
- input_documents are mapped in BaseCombineDocumentsChain.combine_docs()
- variable initialization and the chain_type-to-loader mapping function are in https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/loading.py
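That mapping is essentially a dict from a chain_type string to a loader function; a minimal sketch (loader names and return values here are simplified stand-ins, not LangChain's real loaders):

```python
# Sketch of loading.py's chain_type -> loader mapping.
# The _load_* functions are placeholders for the real chain constructors.

def _load_stuff(llm):
    return ("stuff-chain", llm)

def _load_map_reduce(llm):
    return ("map-reduce-chain", llm)

def _load_refine(llm):
    return ("refine-chain", llm)

LOADER_MAPPING = {
    "stuff": _load_stuff,
    "map_reduce": _load_map_reduce,
    "refine": _load_refine,
}

def load_qa_chain(llm, chain_type: str = "stuff"):
    """Dispatch to the loader registered for chain_type."""
    if chain_type not in LOADER_MAPPING:
        raise ValueError(f"Unsupported chain type: {chain_type}")
    return LOADER_MAPPING[chain_type](llm)
```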
- documents are combined with the combine_docs() function
- chain types:
  - stuff: stuff all documents into one prompt and pass to the LLM
  - refine
  - map_reduce
- complicated implementation because it covers stuff, the map step, and the reduce step
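The cost difference between the strategies above, and why combining is slower than a single native inference, can be sketched with a fake LLM callable (fake_llm is a stand-in, not a real LangChain LLM):

```python
# Sketch contrasting the "stuff" and "map_reduce" combine strategies,
# counting how many LLM calls each one makes.

def fake_llm(prompt: str) -> str:
    # Stand-in "model": just reports how many lines it was given.
    return f"answer from {prompt.count(chr(10)) + 1} line(s)"

def stuff(docs, question):
    """One LLM call: all documents pasted into a single prompt."""
    prompt = "\n".join(docs) + "\n" + question
    return fake_llm(prompt), 1  # (answer, number of LLM calls)

def map_reduce(docs, question):
    """One LLM call per document (map), plus one call to combine (reduce)."""
    partials = [fake_llm(d + "\n" + question) for d in docs]  # map step
    final = fake_llm("\n".join(partials) + "\n" + question)   # reduce step
    return final, len(docs) + 1
```

With 3 documents, stuff makes 1 call while map_reduce makes 4, which matches the note that the combine strategies are slow due to multiple inferences.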
- inference with the docs happens in https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/llm.py
  - call chain: predict() → _call() → generate() → prepare prompts
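The LLMChain flow above (predict calling through to generate, which first prepares prompts from a template) can be sketched as follows; LLMChainSketch is an illustrative stand-in, not the real LangChain class:

```python
# Sketch of the LLMChain flow: predict() -> generate(), where generate()
# first prepares the prompt by filling a template with the inputs.
# (The real chain routes through _call(); that step is folded in here.)

class LLMChainSketch:
    def __init__(self, template: str, llm):
        self.template = template  # e.g. "Question: {question}"
        self.llm = llm            # any callable: str -> str

    def prep_prompt(self, inputs: dict) -> str:
        # prepare prompts: substitute input variables into the template
        return self.template.format(**inputs)

    def generate(self, inputs: dict) -> str:
        prompt = self.prep_prompt(inputs)
        return self.llm(prompt)

    def predict(self, **inputs) -> str:
        # public entry point
        return self.generate(inputs)
```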
- base LLM layer: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/base.py
  - generate_prompt() → generate()
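The split between those two methods in the base LLM layer can be sketched like this (PromptValueSketch and the reversed-string "model" are assumptions for illustration only):

```python
# Sketch of the base LLM layer: generate_prompt() reduces prompt objects
# to plain strings, then generate() runs the model over the batch.

class PromptValueSketch:
    def __init__(self, text: str):
        self.text = text

    def to_string(self) -> str:
        return self.text

class BaseLLMSketch:
    def _model(self, prompt: str) -> str:
        # stand-in for the actual model inference (reverses the string)
        return prompt[::-1]

    def generate(self, prompts: list) -> list:
        # one generation per plain prompt string
        return [self._model(p) for p in prompts]

    def generate_prompt(self, prompts: list) -> list:
        # convert prompt objects to strings, then delegate to generate()
        return self.generate([p.to_string() for p in prompts])
```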