This notebook demonstrates how to build a semantic search application using OpenAI and MongoDB Atlas Vector Search.
!pip install pymongo openai
Successfully installed dnspython-2.4.2 h11-0.14.0 httpcore-1.0.2 httpx-0.25.1 openai-1.3.3 pymongo-4.6.0
There are two prerequisites for this:
MongoDB Atlas cluster: To create a forever-free MongoDB Atlas cluster, you first need a MongoDB Atlas account if you don't already have one. Visit the MongoDB Atlas website and click “Register.” Then, from the MongoDB Atlas dashboard, set up your cluster. To take advantage of the $vectorSearch operator in an aggregation pipeline, you need to run MongoDB Atlas 6.0.11 or higher. This tutorial can be built using a free cluster. When you're setting up your deployment, you'll be prompted to set up a database user and rules for your network connection. Please ensure you save your username and password somewhere safe and have the correct IP address rules in place so your cluster can connect properly. If you need more help getting started, check out our tutorial on MongoDB Atlas.
OpenAI API key: To create your OpenAI key, you'll need to create an account. Once you have one, visit the OpenAI platform. Click on your profile icon in the top right of the screen to open the dropdown menu and select “View API keys”.
import getpass
MONGODB_ATLAS_CLUSTER_URI = getpass.getpass("MongoDB Atlas Cluster URI:")
OPENAI_API_KEY = getpass.getpass("OpenAI API Key:")
MongoDB Atlas Cluster URI:·········· OpenAI API Key:··········
Note: After executing the step above, you will be prompted to enter the credentials.
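If you prefer not to enter credentials interactively, a minimal alternative is to read them from environment variables. This is a sketch that assumes you have already exported MONGODB_ATLAS_CLUSTER_URI and OPENAI_API_KEY in your shell:
import os

# Assumes these environment variables were exported beforehand;
# adjust the names to match your own setup.
MONGODB_ATLAS_CLUSTER_URI = os.environ["MONGODB_ATLAS_CLUSTER_URI"]
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]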
For this tutorial, we will be using the MongoDB sample dataset. Load the sample dataset using the Atlas UI. We'll be using the “sample_mflix” database, which contains a “movies” collection where each document contains fields like title, plot, genres, cast, directors, etc.
import openai
import pymongo
client = pymongo.MongoClient(MONGODB_ATLAS_CLUSTER_URI)
db = client.sample_mflix
collection = db.movies
openai.api_key = OPENAI_API_KEY
ATLAS_VECTOR_SEARCH_INDEX_NAME = "default"
EMBEDDING_FIELD_NAME = "embedding_openai_nov19_23"
model = "text-embedding-3-small"
def generate_embedding(text: str) -> list[float]:
    # Embed a single piece of text with the OpenAI embeddings endpoint
    return openai.embeddings.create(input=[text], model=model).data[0].embedding
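As a quick sanity check before embedding the data, you can confirm both the Atlas connection and the embedding size. This is a minimal sketch; the expected length of 1536 assumes the default output dimensionality of text-embedding-3-small:
# Confirm connectivity to the Atlas cluster
client.admin.command("ping")

# The vector length must match the "dimensions" value used in the search index later on
print(len(generate_embedding("The quick brown fox jumps over the lazy dog")))  # expected: 1536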
Each document in the sample dataset sample_mflix.movies corresponds to a movie; we will execute an operation to create a vector embedding for the data in the "plot" field and store it in the database. Creating vector embeddings with the OpenAI embeddings endpoint is necessary for performing a similarity search based on intent.
from pymongo import ReplaceOne

# Update the collection with the embeddings
requests = []

for doc in collection.find({'plot': {"$exists": True}}).limit(500):
    doc[EMBEDDING_FIELD_NAME] = generate_embedding(doc['plot'])
    requests.append(ReplaceOne({'_id': doc['_id']}, doc))

collection.bulk_write(requests)
BulkWriteResult({'writeErrors': [], 'writeConcernErrors': [], 'nInserted': 0, 'nUpserted': 0, 'nMatched': 50, 'nModified': 50, 'nRemoved': 0, 'upserted': []}, acknowledged=True)
After executing the above, the documents in the "movies" collection will contain an additional embedding field, as defined by the EMBEDDING_FIELD_NAME variable, apart from already existing fields like title, plot, genres, cast, directors, etc.
Note: We are restricting this to just 500 documents in the interest of time. If you want to do this over the entire dataset of 23,000+ documents in our sample_mflix database, it will take a little while. Alternatively, you can use the sample_mflix.embedded_movies collection which includes a pre-populated plot_embedding
field that contains embeddings created using OpenAI's text-embedding-3-small
embedding model that you can use with the Atlas Search vector search feature.
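If you do want to embed a larger slice of the collection, one option is to send several plots per embeddings request and write the vectors back with targeted $set updates instead of replacing whole documents. The sketch below is not part of the original flow, and the batch size of 100 is an arbitrary choice:
from pymongo import UpdateOne

batch_size = 100  # arbitrary; the embeddings endpoint accepts multiple inputs per request
docs = list(collection.find({'plot': {"$exists": True}}).limit(500))

for i in range(0, len(docs), batch_size):
    batch = docs[i:i + batch_size]
    # One API call returns embeddings for the whole batch, in input order
    response = openai.embeddings.create(input=[d['plot'] for d in batch], model=model)
    updates = [
        UpdateOne({'_id': d['_id']}, {'$set': {EMBEDDING_FIELD_NAME: item.embedding}})
        for d, item in zip(batch, response.data)
    ]
    collection.bulk_write(updates)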
We will create an Atlas Vector Search index on this collection, which will allow us to perform the approximate KNN search that powers semantic search. We will cover two ways to create this index: the Atlas UI and the MongoDB Python driver.
(Optional) Documentation: Create a Vector Search Index
Now head over to the Atlas UI and create an Atlas Vector Search index using the steps described here. The 'dimensions' field, with value 1536, corresponds to the output size of the OpenAI text-embedding-3-small model used in this notebook (text-embedding-ada-002 produces vectors of the same dimensionality).
Use the definition given below in the JSON editor on the Atlas UI, making sure the field name matches the value of EMBEDDING_FIELD_NAME.
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding_openai_nov19_23": {
        "dimensions": 1536,
        "similarity": "dotProduct",
        "type": "knnVector"
      }
    }
  }
}
(Optional) Alternatively, we can use the PyMongo driver to create these vector search indexes programmatically. The Python command given in the cell below will create the index (this only works with the most recent version of the Python driver for MongoDB and a MongoDB Atlas cluster running server version 7.0+).
collection.create_search_index(
    {
        "definition": {
            "mappings": {
                "dynamic": True,
                "fields": {
                    EMBEDDING_FIELD_NAME: {
                        "dimensions": 1536,
                        "similarity": "dotProduct",
                        "type": "knnVector"
                    }
                }
            }
        },
        "name": ATLAS_VECTOR_SEARCH_INDEX_NAME
    }
)
'default'
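Search index builds are asynchronous, so the index may not be immediately queryable after the call returns. One way to wait for it (a sketch using list_search_indexes, available in recent PyMongo versions against Atlas) is to poll until the index reports queryable: True:
import time

# Poll Atlas until the new search index is ready to serve queries
while True:
    indexes = list(collection.list_search_indexes(ATLAS_VECTOR_SEARCH_INDEX_NAME))
    if indexes and indexes[0].get("queryable"):
        break
    time.sleep(5)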
The query below finds movies whose plots are semantically similar to the text captured in the query string, rather than relying on keyword search.
(Optional) Documentation: Run Vector Search Queries
def query_results(query, k):
    # Embed the query string and run an approximate KNN search against the index,
    # returning the top k matches
    results = collection.aggregate([
        {
            '$vectorSearch': {
                "index": ATLAS_VECTOR_SEARCH_INDEX_NAME,
                "path": EMBEDDING_FIELD_NAME,
                "queryVector": generate_embedding(query),
                "numCandidates": 50,
                "limit": k,
            }
        }
    ])
    return results
query="imaginary characters from outerspace at war with earthlings"
movies = query_results(query, 5)
for movie in movies:
print(f'Movie Name: {movie["title"]},\nMovie Plot: {movie["plot"]}\n')
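To see how close each match is, you can add a $project stage that keeps only the fields you need and surfaces the similarity score via the vectorSearchScore metadata. This is a small variation on the query above:
results = collection.aggregate([
    {
        '$vectorSearch': {
            "index": ATLAS_VECTOR_SEARCH_INDEX_NAME,
            "path": EMBEDDING_FIELD_NAME,
            "queryVector": generate_embedding(query),
            "numCandidates": 50,
            "limit": 5,
        }
    },
    {
        # Return only title, plot, and the vector search score
        '$project': {
            '_id': 0,
            'title': 1,
            'plot': 1,
            'score': {'$meta': 'vectorSearchScore'}
        }
    }
])

for doc in results:
    print(f'Movie Name: {doc["title"]} (score: {doc["score"]:.4f})')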