I am getting the following error when trying to run Ollama with Llama 3 and invoking the model from LangChain (Python):
langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llama3`.
Context:
from langchain_community.llms import Ollama
llm = Ollama(model="llama3", base_url="http://localhost:11434/")
llm.invoke("Why is the sky blue?")
I tried running Ollama as a service with `ollama serve` (it does not seem to make a difference).
I can see that Ollama is running on localhost:11434.
I get a 404 error when I try to access localhost:11434/llama3.
`ollama list` shows llama3 installed.
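For completeness, here is a small check of what the server actually exposes (a sketch assuming the standard Ollama REST API, where locally pulled models are listed at /api/tags and generation goes through /api/generate, so a 404 on localhost:11434/llama3 by itself is expected):

import requests

# Ask the local Ollama server which models it has pulled (the /api/tags endpoint).
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print(models)  # something like ['llama3:latest'] should appear if the model is installed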
I faced the same issue with both llama3 and llama2 on my Mac. Here is how it got resolved:
llm = Ollama(model="llama3", base_url="http://localhost:11434")
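The only change from the snippet in the question is dropping the trailing slash from base_url. Putting it together, a minimal sketch assuming llama3 has already been pulled and the server is on the default port:

from langchain_community.llms import Ollama

# No trailing slash on base_url; with the slash the client can end up requesting
# http://localhost:11434//api/generate, which the server may answer with a 404.
llm = Ollama(model="llama3", base_url="http://localhost:11434")
print(llm.invoke("Why is the sky blue?"))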
It works for me in a Jupyter notebook when I set model="llama2":
llm = Ollama(model="llama2")
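For reference, a minimal version of that call (a sketch assuming the langchain_community Ollama wrapper falls back to http://localhost:11434, with no trailing slash, when base_url is not passed, which sidesteps the problem above):

from langchain_community.llms import Ollama

# base_url is omitted here; the wrapper is assumed to default to
# http://localhost:11434, so only the model name is required.
llm = Ollama(model="llama2")
print(llm.invoke("Why is the sky blue?"))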