Pondhouse Data AI - Edition 6
Llama 3.1 8B: the first outstanding local-first LLM | Tutorial: How to create knowledge graphs | CopilotKit - add copilots to your app with just one line of code

Hey there,
We’re excited to bring you the 6th edition of our Pondhouse AI newsletter — your source for tips and tricks around AI and LLMs. Whether you want to learn about AI concepts, use AI tools effectively, or see inspiring examples, we’ve got you covered.
Let’s get started!
Cheers, Andreas & Sascha
In today's edition:
News: Llama 3.1 8B is the first outstanding local-first LLM
Tutorial: Create and understand knowledge graphs with the Neo4j knowledge graph builder
Tip of the Week: Want more control over your AI pipelines? Add a human-in-the-loop
Tool of the Week: CopilotKit - a framework to easily add copilot functionality to your apps
Find this Newsletter helpful?
Please forward it to your colleagues and friends - it helps us tremendously.
Top News
Llama 3.1 paves the road for many highly efficient LLM applications
With much anticipation, Meta released their latest series of LLM models - simply named Llama 3.1.
The flagship is certainly the Llama 3.1 405B model, the first openly available model that rivals the top AI models in state-of-the-art capabilities: general knowledge, steerability, math, tool use, and multilingual translation. In most benchmarks it is on par with or even exceeds GPT-4 and GPT-4o - a major victory for the open source community.
Key Features:
Context length of 128K tokens
Stronger reasoning capabilities
Multiple sizes available: 405B, 70B and 8B parameters
Performance Metrics: As Meta's own benchmarks as well as many independent ones confirm, Llama 3.1 405B is one of the best models available at the moment.

Why are we excited? It’s not just because Open Source won.
While it's certainly great to see that open source AI was able to close the gap to closed source models, we are most excited about the smaller models in the Llama 3.1 herd of models - especially the Llama 3.1 8B model.
Why, you ask? Because this comparably tiny model beats GPT-3.5-Turbo in almost every benchmark. Furthermore, 8B parameter models run on rather cheap hardware with very low output latency. This means we finally have a great model for local-first AI applications.
For more details, read the full announcement here.
Tutorials & Use Cases
Create knowledge graphs with Neo4j LLM knowledge graph builder
Knowledge graphs have become essential tools for data management and analysis. These powerful data structures offer a way to connect, visualize and retrieve complex information. Neo4j, arguably the leading graph database platform, has recently introduced the LLM Knowledge Graph Builder, designed to simplify the creation of knowledge graphs for Retrieval-Augmented Generation (RAG) - and to help users better understand knowledge graphs.

Knowledge graphs might be the hidden piece of technology needed to move RAG applications from quite good to remarkable!
Historically, creating knowledge graphs was quite complex and time consuming. Modern LLMs, however, have made this process rather easy. What was still missing was the explorability of these graphs: anybody could create graphs, but most people did not understand what exactly happened within this new data structure.
The LLM knowledge graph builder changes that:
Features:
Create knowledge graphs using OpenAI GPT models
Store the graph in Neo4j
Visualize and explore the graph with a user-friendly web UI
Use the knowledge graph for retrieval augmented generation
Compare knowledge graph based RAG with vector-only RAG
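To make the core idea concrete, here is a minimal, hypothetical sketch of what such a builder does under the hood: an LLM extracts (subject, relation, object) triples from text, which are then merged into Neo4j as nodes and relationships. The triples below are hard-coded to stand in for the LLM extraction step, and the helper names are our own, not the tool's actual API.

```python
# Sketch: turn LLM-extracted triples into idempotent Cypher MERGE statements,
# which a Neo4j driver could then execute to build the knowledge graph.

def triples_to_cypher(triples):
    """Map (subject, relation, object) triples to Cypher MERGE statements."""
    statements = []
    for subj, rel, obj in triples:
        rel_type = rel.upper().replace(" ", "_")  # e.g. "has size" -> HAS_SIZE
        statements.append(
            f'MERGE (a:Entity {{name: "{subj}"}}) '
            f'MERGE (b:Entity {{name: "{obj}"}}) '
            f"MERGE (a)-[:{rel_type}]->(b)"
        )
    return statements

# Triples as an LLM might extract them from "Meta released Llama 3.1 (405B)."
triples = [("Meta", "released", "Llama 3.1"), ("Llama 3.1", "has size", "405B")]
for stmt in triples_to_cypher(triples):
    print(stmt)
```

Using MERGE (rather than CREATE) keeps the graph deduplicated when the same entity shows up in multiple documents - one reason graph builders are idempotent across re-runs.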
For a comprehensive guide, read the full tutorial here.
Also in the news
OpenAI announces “SearchGPT” - a Google and Perplexity AI rival for AI-enhanced web-search
The creators of GPT-4 are testing a preliminary version of “SearchGPT” - their attempt at using AI to enhance web search.
SearchGPT should properly cite and link to content publishers in searches. Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links.
For more details, read the full article here.
Meta reportedly withholding new AI models in EU
Meta announced that they will not release their upcoming multimodal AI model in the EU due to “regulatory concerns”. They will, however, release their text-only models.
On Twitter, Yann LeCun, head of Meta's AI program, added that they'll consider releasing their models in the EU if the EU agrees to let them train their models on users' image data.
Read the full article here.
JPMorgan launches in-house chatbot as research analyst
JPMorgan Chase has launched an AI-powered chatbot called LLM Suite to enhance the productivity of its asset and wealth management division. This AI tool is designed to perform research analyst duties, such as writing, idea generation, and document summarization. Approximately 50,000 employees have access to this tool, which aims to streamline operations and improve efficiency. This move places JPMorgan in direct competition with Morgan Stanley, which has also implemented a similar AI-driven chatbot in collaboration with OpenAI.
For more details, read the full article here.
Tip of the week
Want more control over your AI pipelines? Add a human-in-the-loop
(Agentic) AI systems are great - until they aren't. Companies need ways to benefit from the huge automation capabilities of modern AI systems, but at the same time require control over certain steps in the workflow.
That’s where human-in-the-loop comes into play.
Human-in-the-loop - as the name suggests - is a concept where otherwise automated processes are halted until a human confirms or provides input. With AI systems, we have two options:
Add human interaction requirements at certain steps in the pipeline
Allow the AI agent itself to ask for human input
Especially the second option is highly interesting, as we can use the strong reasoning abilities of modern LLMs to “ask for help”.
LlamaIndex - one of the major LLM integration frameworks on the market - offers a nice and easy interface for doing just that: providing the AI model an interface to us humans.
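A minimal, framework-agnostic sketch of the pattern, assuming a pipeline where risky steps pause for human approval; the function and parameter names here are illustrative and not LlamaIndex's actual API:

```python
# Sketch: wrap each pipeline step so that risky actions are halted until a
# human confirms. The human is modeled as an injected callback (ask_human),
# which in a real app could read from stdin, a chat UI, or a review queue.

def run_step(action, risky, ask_human):
    """Execute an action, but pause for human approval on risky steps."""
    if risky:
        answer = ask_human(f"About to run: {action!r}. Proceed? (yes/no) ")
        if answer.strip().lower() != "yes":
            return f"skipped: {action}"
    return f"executed: {action}"

# Canned answers keep the example self-contained and non-interactive.
print(run_step("summarize report", risky=False, ask_human=lambda p: "yes"))
print(run_step("send email to all customers", risky=True,
               ask_human=lambda p: "no"))
```

Injecting the approval callback (rather than calling `input()` directly) is what makes the same pipeline testable in CI and interactive in production.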
Tool of the week
CopilotKit
Everyone and their mum is integrating copilots into their tools lately. Rightfully so - these chat-like interfaces can help tremendously.
CopilotKit is an open source copilot framework, helping to build, deploy, and operate fully custom AI Copilots.

Why CopilotKit?
CopilotChat: An app-aware AI chatbot that can interact with users within the application context, providing relevant responses based on real-time data.
CopilotTextarea: AI-powered writing assistance integrated directly into your app, facilitating tasks like summarizing content.
In-app Agents (via LangChain): Allows the integration of standalone LLM chains and graphs as interactive components in your application with minimal coding effort.
Co-Agents: Enables end-users to review and correct an agent’s internal operations, enhancing user experience and reliability.
Context-aware: Copilots are informed by real-time application context from various sources (frontend, backend, or third-party systems) to provide more accurate and relevant assistance.
Actionable: Copilots can initiate actions across frontend, backend, or third-party systems, making them highly functional and versatile.
Generative UI: Offers the ability to display custom UX components within the chat, with built-in support for streaming actions, tools, and agents.
CopilotTask: Allows binding of Copilot interactions to native application UX, enabling interaction through either chat UX or native UX.
How it Works
Define the following simple entry-points into your application, and the CopilotKit execution engine takes care of the rest:
Application state (frontend + backend + 3rd party)
Application interaction (via plain TypeScript code, frontend + backend)
Purpose-specific LLM chains
The GitHub page linked below even provides remarkable getting-started examples.
For more information and examples, visit the CopilotKit GitHub page.
We hope you liked our newsletter and that you stay tuned for the next edition. If you need help with your AI tasks and implementations - let us know. We are happy to help!