
Connect and verify your setup

Here, we'll perform basic operations to communicate with Weaviate using the Python client library.
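
If you have not yet instantiated a client, the sketch below shows two common ways to connect with the v4 Python client. It assumes either a local instance (for example, started with Docker) or a Weaviate Cloud cluster whose URL and API key are stored in the hypothetical WEAVIATE_URL and WEAVIATE_API_KEY environment variables.

import os
import weaviate

# Option 1: Connect to a local Weaviate instance (e.g. running via Docker)
client = weaviate.connect_to_local()

# Option 2: Connect to a Weaviate Cloud cluster
# (WEAVIATE_URL and WEAVIATE_API_KEY are assumed environment variable names)
# client = weaviate.connect_to_weaviate_cloud(
#     cluster_url=os.environ["WEAVIATE_URL"],
#     auth_credentials=weaviate.auth.AuthApiKey(os.environ["WEAVIATE_API_KEY"]),
# )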

Check Weaviate status

You can check whether the Weaviate instance is up and ready to use by calling the is_ready method.

# Instantiate your client (not shown). e.g.:
# client = weaviate.connect_to_weaviate_cloud(...) or
# client = weaviate.connect_to_local(...)

assert client.is_ready() # This will raise an exception if the client is not ready
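
If you prefer not to rely on an assertion, you can branch on the boolean that is_ready returns instead. A minimal sketch:

if client.is_ready():
    print("Weaviate is up and ready to accept requests")
else:
    print("Weaviate is not ready - check that the instance is running and reachable")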

Retrieve server meta information

You can retrieve meta information about the Weaviate instance using the get_meta function.

import json

# Instantiate your client (not shown). e.g.:
# client = weaviate.connect_to_weaviate_cloud(...) or
# client = weaviate.connect_to_local(...)

metainfo = client.get_meta()
print(json.dumps(metainfo, indent=2)) # Print the meta information in a readable format

This will print the server meta information to the console. The output will look similar to the following:

Example get_meta output
{
  "hostname": "http://[::]:8080",
  "modules": {
    "backup-gcs": {
      "bucketName": "weaviate-wcs-prod-cust-europe-west2-workloads-backups",
      "rootName": "8616b69e-f8d2-4547-ad92-70b9557591c0"
    },
    "generative-aws": {
      "documentationHref": "https://docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html",
      "name": "Generative Search - AWS"
    },
    "generative-cohere": {
      "documentationHref": "https://docs.cohere.com/reference/generate",
      "name": "Generative Search - Cohere"
    },
    "generative-openai": {
      "documentationHref": "https://platform.openai.com/docs/api-reference/completions",
      "name": "Generative Search - OpenAI"
    },
    "generative-palm": {
      "documentationHref": "https://cloud.google.com/vertex-ai/docs/generative-ai/chat/test-chat-prompts",
      "name": "Generative Search - Google PaLM"
    },
    "qna-openai": {
      "documentationHref": "https://platform.openai.com/docs/api-reference/completions",
      "name": "OpenAI Question & Answering Module"
    },
    "ref2vec-centroid": {},
    "reranker-cohere": {
      "documentationHref": "https://txt.cohere.com/rerank/",
      "name": "Reranker - Cohere"
    },
    "text2vec-aws": {
      "documentationHref": "https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings",
      "name": "AWS Module"
    },
    "text2vec-cohere": {
      "documentationHref": "https://docs.cohere.ai/embedding-wiki/",
      "name": "Cohere Module"
    },
    "text2vec-huggingface": {
      "documentationHref": "https://huggingface.co/docs/api-inference/detailed_parameters#feature-extraction-task",
      "name": "Hugging Face Module"
    },
    "text2vec-jinaai": {
      "documentationHref": "https://jina.ai/embeddings/",
      "name": "JinaAI Module"
    },
    "text2vec-openai": {
      "documentationHref": "https://platform.openai.com/docs/guides/embeddings/what-are-embeddings",
      "name": "OpenAI Module"
    },
    "text2vec-palm": {
      "documentationHref": "https://cloud.google.com/vertex-ai/docs/generative-ai/embeddings/get-text-embeddings",
      "name": "Google PaLM Module"
    }
  },
  "version": "1.23.8"
}
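
Because get_meta returns a regular Python dictionary, you can also read individual fields from it. A minimal sketch, using the key names from the sample output above:

metainfo = client.get_meta()

print(metainfo["version"])          # Server version, e.g. "1.23.8"
print(sorted(metainfo["modules"]))  # Names of the enabled modules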

Close the connection

After you have finished using the Weaviate client, close the connection. This releases resources on both the client and the server.

As a best practice, wrap your work in a try-finally block so the connection is closed even if an error occurs. For brevity, the remaining code snippets omit the try-finally block.

import weaviate
import os

# Instantiate your client (not shown). e.g.:
# client = weaviate.connect_to_weaviate_cloud(...) or
# client = weaviate.connect_to_local(...)

try:
    # Work with the client here - e.g.:
    assert client.is_ready()

finally:  # This will always be executed, even if an exception is raised
    client.close()  # Close the connection & release resources
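
Alternatively, if your client version supports it, the client can be used as a context manager, which closes the connection automatically when the with block exits. A minimal sketch, assuming a local instance:

import weaviate

# The connection is closed automatically when the "with" block exits
with weaviate.connect_to_local() as client:
    assert client.is_ready()
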
What's next?

You have confirmed that your Weaviate instance is running and connected to the Python client library. You can now proceed to the next module to populate the Weaviate instance with data.
