Questions tagged [llm]

A general tag for large language model (LLM)-related subjects. Please ALWAYS use the more specific tags if available (GPT variants, PaLM, LLaMA, BLOOM, Claude, etc.).

A large language model is characterized by its large size. Its size is made possible by AI accelerators, which are able to process huge amounts of text data, usually scraped from the internet.

200 questions
0
votes
1 answer

How to load an .mdl file in Python?

I plan to fine-tune a GPT transformer model with a custom dataset, specifically the EmpatheticDialogues dataset, for my chatbot. The repository provides an .mdl file for their project. How will I be able to load these .mdl files in my ipynb file? Is…
ss1
  • 43
  • 7
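A minimal sketch of how such a file might be loaded, assuming the .mdl file is a pickled PyTorch checkpoint (an assumption; the actual format depends on the EmpatheticDialogues repository):

    import torch

    # Assumption: the .mdl file is a torch-saved checkpoint (a state dict or a full model object).
    checkpoint = torch.load("model.mdl", map_location="cpu")
    print(type(checkpoint))

    # If it turns out to be a state dict, build the matching model first and then:
    # model = MyGPTModel(...)            # hypothetical model class from the repository
    # model.load_state_dict(checkpoint)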
-1
votes
0 answers

Security of database credentials when connecting LangChain with OpenAI

In LangChain you define the connection credentials for the database. Finally, the "connection string" is passed to the OpenAI models. I consider this dangerous to share with OpenAI. server = 'server-name' database = 'db-name' username =…
Pool Nolasco
  • 93
  • 1
  • 6
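For context: in the usual LangChain SQL setup the connection string is only used locally to query the database; the prompt sent to OpenAI contains table schemas and query results, not the credentials. A minimal sketch that keeps the credentials out of source code, assuming a SQL Server database reached via pyodbc:

    import os
    from langchain.sql_database import SQLDatabase

    # Keep credentials in environment variables rather than in the notebook.
    server = os.environ["DB_SERVER"]
    database = os.environ["DB_NAME"]
    username = os.environ["DB_USER"]
    password = os.environ["DB_PASSWORD"]

    uri = (
        f"mssql+pyodbc://{username}:{password}@{server}/{database}"
        "?driver=ODBC+Driver+17+for+SQL+Server"
    )

    # The URI stays on your machine; only schemas and query results reach the LLM.
    db = SQLDatabase.from_uri(uri)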
-1
votes
0 answers

Error: "from .rvlcdip import RvlCdipDataset, get_rvlcdip_labels" raises "ImportError: attempted relative import with no known parent package"

I am running an init.py and it only contains one line: "from .rvlcdip import RvlCdipDataset, get_rvlcdip_labels". It shows me the error "ImportError: attempted relative import with no known parent package". I simply run this init.py file.
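This error typically means a file containing a relative import was executed directly as a script, so Python has no parent package to resolve ".rvlcdip" against. A sketch of the two usual fixes, using a hypothetical package layout:

    # Layout (hypothetical):
    #   mypackage/
    #       __init__.py   <- from .rvlcdip import RvlCdipDataset, get_rvlcdip_labels
    #       rvlcdip.py

    # Option 1: don't run the file directly; import the package, or run a module with -m:
    #   python -c "import mypackage"
    #   python -m mypackage.some_module

    # Option 2: if the file must run as a plain script, switch to an absolute import:
    from rvlcdip import RvlCdipDataset, get_rvlcdip_labels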
-1
votes
0 answers

How to fine-tune an LLM so that it can solve multiple-choice problems

I'm a novice learner in the LLM area, and I want to fine-tune an LLM (7B params) named 'Moss' so that it can answer multiple-choice questions. The pretrained model and pretrained tokenizer are provided by Hugging Face. And I integrate…
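A minimal sketch of one common approach: turn each multiple-choice item into prompt/answer text and fine-tune the model as an ordinary causal LM. The model id below is an assumption (MOSS on the Hugging Face Hub); the training loop itself (Trainer, LoRA, etc.) is omitted:

    from transformers import AutoTokenizer

    # Assumed model id; trust_remote_code is needed for the custom MOSS tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("fnlp/moss-moon-003-sft", trust_remote_code=True)

    def format_example(question, choices, answer_letter):
        options = "\n".join(f"{letter}. {text}" for letter, text in zip("ABCD", choices))
        prompt = f"Question: {question}\n{options}\nAnswer:"
        # Train on prompt + answer so the model learns to emit the correct letter.
        return tokenizer(prompt + " " + answer_letter, truncation=True, max_length=512)

    encoded = format_example(
        "Which planet is closest to the sun?",
        ["Venus", "Mercury", "Earth", "Mars"],
        "B",
    )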
-1
votes
1 answer

How to achieve text embedding with BERT?

I am trying to build a text embedding function with BERT. It is said that BERT can do text embedding. However, I cannot find the embedding function in BERT's tutorial. Here is the link I looked up:…
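One standard recipe (there is no single official "embedding function" in BERT itself) is to mean-pool the last hidden states via the transformers library:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def embed(texts):
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state        # (batch, seq_len, 768)
        mask = batch["attention_mask"].unsqueeze(-1)         # ignore padding tokens
        return (hidden * mask).sum(1) / mask.sum(1)          # mean-pooled sentence vectors

    vectors = embed(["BERT can produce text embeddings.", "Here is a second sentence."])
    print(vectors.shape)   # torch.Size([2, 768])

Libraries such as sentence-transformers wrap this pattern (with better-tuned pooling) behind a single encode() call.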
-1
votes
0 answers

What's the role of -qU in this code: "!pip install -qU transformers accelerate einops langchain xformers bitsandbytes faiss-gpu sentence_transformers"

I am following this blog, and in the coding part they mention installing libraries at the very beginning. In the first line, what's the use of '-qU'? What does it do specifically here? I tried searching the documentation but didn't find it. It…
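Short answer: -q is pip's --quiet flag (less console output) and -U is --upgrade (upgrade packages that are already installed). The same notebook cell written out in full would be:

    # -q -> --quiet    : reduce pip's console output
    # -U -> --upgrade  : upgrade already-installed packages to the newest version
    !pip install --quiet --upgrade transformers accelerate einops langchain xformers bitsandbytes faiss-gpu sentence_transformers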
-1
votes
1 answer

How to force Falcon 40B to print in JSON format?

I have been trying to extract the start time and end time from the input text using Falcon 40B. This was my prompt: Identify the following items from the given text which states random shipping details: start_time end_time. The input text is…
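A minimal sketch of the usual workaround: ask for JSON explicitly in the prompt and parse the output defensively. The generation call itself is stubbed out here, since it depends on how Falcon is being served:

    import json
    import re

    prompt = (
        "Identify the start_time and end_time in the text below.\n"
        'Respond with ONLY a JSON object of the form {"start_time": "...", "end_time": "..."} '
        "and nothing else.\n\n"
        "Text: The package ships at 9:00 AM and arrives by 5:30 PM."
    )

    # generated = falcon_pipeline(prompt)[0]["generated_text"]   # hypothetical generation call
    generated = '{"start_time": "9:00 AM", "end_time": "5:30 PM"}'  # stand-in output for this sketch

    # Models often wrap the JSON in extra prose, so pull out the first {...} block before parsing.
    match = re.search(r"\{.*\}", generated, re.DOTALL)
    result = json.loads(match.group(0)) if match else None
    print(result)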
-1
votes
1 answer

Interact with the oobabooga webui running in Colab from my local PC

How can I interact with the oobabooga webui from my Python terminal? I am running Wizard models or the Pygmalion model. Has anyone tried this before? If yes, can you provide the code? I have tried to do it with the API examples given in the project's GitHub, but I…
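A sketch of calling the webui over HTTP from a local Python terminal. The endpoint path and payload below match the webui's legacy blocking API and are assumptions: they depend on the webui version, on starting it with the API enabled (--api), and on exposing the Colab instance through a public or tunnelled URL:

    import requests

    URL = "http://localhost:5000/api/v1/generate"   # replace with the URL the Colab notebook prints

    payload = {"prompt": "Hello, how are you?", "max_new_tokens": 100}
    response = requests.post(URL, json=payload, timeout=60)
    print(response.json()["results"][0]["text"])

Newer versions of the webui expose an OpenAI-compatible /v1/completions endpoint instead, so check which API your build actually serves.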
-1
votes
1 answer

Override LangChain LLM completion call to handle a 400 moderated response

Using langchain==0.0.198, langchainplus-sdk==0.0.8, and python==3.10, I am working with OpenAI on Azure. Currently, the moderation endpoint that was available in non-Azure OpenAI is no longer available with Azure OpenAI, and has been "bundled" with the…
camelBack
  • 748
  • 2
  • 11
  • 30
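A sketch of a wrapper (rather than a true override) that converts Azure's content-filter 400 into a friendly reply, assuming the pre-1.0 openai package that langchain 0.0.198 uses:

    from openai.error import InvalidRequestError

    FALLBACK = "The request was blocked by the content filter. Please rephrase and try again."

    def safe_complete(llm, prompt):
        """Call a LangChain LLM and turn a moderated 400 into a fallback message."""
        try:
            return llm(prompt)
        except InvalidRequestError as exc:
            # Azure returns HTTP 400 with code "content_filter" when the input is moderated.
            if "content_filter" in str(exc):
                return FALLBACK
            raise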
-2
votes
0 answers

Cannot install llamacpp module provided by langchain

n_gpu_layers = 32 # Change this value based on your model and your GPU VRAM pool. n_batch = 256 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU. # Loading model, llm = LlamaCpp( …
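For reference, a minimal sketch of the same load with LangChain's LlamaCpp wrapper. It assumes llama-cpp-python is installed (compiled with GPU support if n_gpu_layers is to have any effect), and the model path is hypothetical:

    # pip install llama-cpp-python langchain
    from langchain.llms import LlamaCpp

    llm = LlamaCpp(
        model_path="./models/llama-2-7b.Q4_0.gguf",  # hypothetical local model file
        n_gpu_layers=32,   # layers offloaded to the GPU; depends on available VRAM
        n_batch=256,       # between 1 and n_ctx; also bounded by VRAM
        n_ctx=2048,
    )
    print(llm("Q: Name a planet in the solar system. A:"))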
-2
votes
1 answer

Predicting next questions in an LLM-powered chatbot

I am building a question-answering chatbot powered by LLMs. I have seen that chatbots like Bing Chat predict the top three questions the user might ask next. My question is: how would I do the same in my chatbot? I have implemented the qa…
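One common approach is to make a second call to the same LLM after answering, asking it to propose follow-up questions. A minimal sketch, assuming a LangChain-style LLM that can be called with a plain string:

    def suggest_followups(llm, question, answer, n=3):
        """Ask the LLM for likely follow-up questions, one per line, Bing-Chat style."""
        prompt = (
            f"A user asked: {question}\n"
            f"The assistant answered: {answer}\n"
            f"Suggest {n} short follow-up questions the user is likely to ask next, one per line."
        )
        text = llm(prompt)
        lines = [line.lstrip("-0123456789. ").strip() for line in text.splitlines() if line.strip()]
        return lines[:n]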
-2
votes
1 answer

Any good LLM model to download and use on a local PC?

Looking for a good LLM model to download and use on the local PC. I tried the Facebook LLaMA model, and other models are not that great in terms of accuracy and response. Looking for a good model recommendation. Looking to use it on a high-end…
Nidhi
  • 1
-2
votes
0 answers

Will Inconsistent Alternation of Responses Affect Fine-Tuning LLAMA2 with Chat History

I am working on fine-tuning LLAMA2 with a dataset containing chat history. While preparing the data, I've noticed that the dialogue doesn't always follow a pattern of alternating responses between speakers. In some cases, one person responds several…
Ivo Oostwegel
  • 374
  • 2
  • 20
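A common pre-processing step is to merge back-to-back messages from the same speaker so the dialogue strictly alternates before it is formatted into the LLaMA-2 chat template. A minimal sketch:

    def merge_consecutive_turns(messages):
        """Collapse consecutive messages from the same speaker into one turn."""
        merged = []
        for role, text in messages:
            if merged and merged[-1][0] == role:
                merged[-1] = (role, merged[-1][1] + "\n" + text)
            else:
                merged.append((role, text))
        return merged

    history = [("user", "hi"), ("user", "are you there?"),
               ("assistant", "Yes, I am."), ("assistant", "How can I help?")]
    print(merge_consecutive_turns(history))
    # [('user', 'hi\nare you there?'), ('assistant', 'Yes, I am.\nHow can I help?')]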
-2
votes
1 answer

Building a Closed-Domain Legal Language Model with LLaMA 2 7B: Pretraining vs. Finetuning, Optimization Strategies, and Feasibility

I'm attempting to build a closed-domain language model specifically tailored to legal services, essentially emulating a standalone lawyer. My approach involves pretraining the LLaMA 2 7B model, focusing only on the legal domain, and then fine-tuning…
-2
votes
0 answers

Chunk the text using the contextual chunking method

I have a PDF; the PDF will be read using a Python package and the text stored. I need to chunk the text using the contextual chunking method.
Jacob Gokul
  • 148
  • 1
  • 1
  • 6
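A minimal sketch of one way to do this, with "contextual" chunking approximated by a structure-aware recursive splitter that keeps overlap between chunks (the PDF library and splitter choice are assumptions; the question does not name them):

    from pypdf import PdfReader
    from langchain.text_splitter import RecursiveCharacterTextSplitter

    # Read and store the PDF text.
    reader = PdfReader("document.pdf")
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Split on paragraphs/sentences first and keep overlap so surrounding context is preserved.
    splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=150,
        separators=["\n\n", "\n", ". ", " "],
    )
    chunks = splitter.split_text(text)
    print(len(chunks))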
1 2 3 … 13 14