Questions tagged [llama-index]
112 questions
0 votes · 1 answer
Do nodes in List Index come with embedding vectors in LlamaIndex?
One can run an embedding-based query on a List Index (link). For that, the nodes in the List Index need to be supplied with embedding vectors. What, then, is the difference between the List Index and the Vector Store Index? I thought that the distinctive…

Roman · 124,451 · 167 · 349 · 456
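A minimal sketch of the two index types, assuming the 0.6-era llama_index API (GPTListIndex / GPTVectorStoreIndex, as_query_engine); names differ in later releases. The usual distinction is that a list index only computes node embeddings lazily when queried in embedding mode, whereas a vector store index embeds every node at build time:

# Sketch only: llama_index ~0.6 API; requires OPENAI_API_KEY and a ./data folder.
from llama_index import GPTListIndex, GPTVectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()

# List index: nodes sit in a flat list; embeddings are only computed on demand
# when the retriever is asked to run in embedding mode.
list_index = GPTListIndex.from_documents(documents)
list_engine = list_index.as_query_engine(retriever_mode="embedding")

# Vector store index: every node is embedded up front and stored in a vector store,
# so retrieval is always similarity-based.
vector_index = GPTVectorStoreIndex.from_documents(documents)
vector_engine = vector_index.as_query_engine()

print(list_engine.query("What topics do the documents cover?"))
print(vector_engine.query("What topics do the documents cover?"))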
0 votes · 0 answers
How to pass a prompt template to GPT Index Method
I am trying to connect a Hugging Face model with external data using GPTListIndex.
GPTListIndex(documents, llm_predictor=llm_predictor)
I also want to use a prompt. Here is the prompt template:
example_prompt = PromptTemplate(
input_variables=["Query",…

Talha Anwar · 2,699 · 4 · 23 · 62
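In the older GPT Index / llama_index API that the question appears to use, a custom prompt is not passed to the GPTListIndex constructor but to the query call as text_qa_template, built from the library's own prompt class rather than LangChain's PromptTemplate. A minimal sketch under that assumption (the template text and query string are illustrative):

# Sketch only: old llama_index (GPT Index) API; requires OPENAI_API_KEY unless an
# llm_predictor is supplied as in the question.
from llama_index import GPTListIndex, QuestionAnswerPrompt, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()

qa_template = QuestionAnswerPrompt(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Using only this context, answer the question: {query_str}\n"
)

# llm_predictor=llm_predictor can still be passed here exactly as in the question.
index = GPTListIndex(documents)

# The custom prompt is supplied per query rather than at index construction.
response = index.query(
    "What does the document say about the topic?",
    text_qa_template=qa_template,
)
print(response)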
-1 votes · 1 answer
ChatGPT API Custom-trained AI Chatbot answering "None" to Python Query
I'm connecting to my first chatbot. Based on the process outlined here:
https://beebom.com/how-train-ai-chatbot-custom-knowledge-base-chatgpt-api/
I created the code he suggested to get ChatGPT to analyze my PDF. The code was a bit outdated though,…

Geoff L · 765 · 5 · 22
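Tutorials from that period typically built the index with GPTSimpleVectorIndex and called index.query directly, both of which changed in later llama_index releases; mixing the old code with a newer install is a common way to end up printing a literal "None". A hedged sketch of the same flow on the 0.6-era API, with the answer text printed explicitly:

# Sketch only: llama_index ~0.6 API; requires OPENAI_API_KEY and a ./docs folder of PDFs.
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("docs").load_data()
index = GPTVectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("Summarize the document.")

# Print the answer text itself; forgetting to return it from a handler function
# (or reading a missing attribute) is a frequent source of "None" answers.
print(response.response)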
-1 votes · 1 answer
How can I import llama in Python?
I tried to install llama with pip:
pip install llama
But I got:
Collecting llama
  Using cached llama-0.1.1.tar.gz (387 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error:…

w s · 11
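The package that provides LlamaIndex on PyPI is llama-index (imported as llama_index); pip install llama pulls in an unrelated project, which is why the build fails. A quick check after installing the right package:

# Assumes `pip install llama-index` has been run (note the hyphen in the package name).
import llama_index
print(llama_index.__version__)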
-1 votes · 1 answer
How to fix `transformers` package not found error in a Python project with `py-langchain`, `llama-index`, and `gradio`?
I get the error ModuleNotFoundError: No module named 'transformers' even though I ran the pip install transformers command. Could you kindly help me? Thank you.
My code:
import os
import sys
from dotenv import load_dotenv
import gradio as gr
from…
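When pip install transformers succeeds but the import still fails, the install usually went into a different interpreter or virtual environment than the one running the script; installing with python -m pip from that same interpreter is the usual fix. A small check:

# Verify which interpreter is running and whether it can already see transformers.
import sys
print("Running under:", sys.executable)

try:
    import transformers
    print("transformers", transformers.__version__, "is importable here")
except ModuleNotFoundError:
    # Install into this exact interpreter, e.g.:
    #   <path printed above> -m pip install transformers
    print("transformers is not installed for this interpreter")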
-2 votes · 0 answers
Cannot install llamacpp module provided by langchain
n_gpu_layers = 32 # Change this value based on your model and your GPU VRAM pool.
n_batch = 256 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
# Loading model,
llm = LlamaCpp(
…
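LangChain's LlamaCpp wrapper depends on the separate llama-cpp-python package (pip install llama-cpp-python; offloading layers to the GPU additionally requires that package to be built with GPU support). A hedged sketch of the surrounding code from the question, with a hypothetical model path:

# Sketch only: assumes llama-cpp-python is installed and a local GGML/GGUF model file exists.
from langchain.llms import LlamaCpp

n_gpu_layers = 32  # layers offloaded to the GPU, as in the question
n_batch = 256      # tokens per batch; keep within n_ctx and available VRAM

llm = LlamaCpp(
    model_path="./models/llama-model.bin",  # hypothetical path to the local model file
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
)

print(llm("Q: What is llama.cpp? A:"))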