LangChain is Driving Chimpanzees Insane

LangChain was once regarded as the PyTorch of LLM integration: the default framework for wiring LLMs into applications. Though it offers a friendlier interface than many alternatives, many developers now advise against using it in production because of its dated design decisions.

It has recently partnered with Ollama to support building tool-calling agents with local models like Llama 3.1, and has also incorporated Mistral Large into its ecosystem. Yet doubts about LangChain's usability persist.

Moreover, with capable small models such as GPT-4o mini and open-weight ones such as Llama 3.1 now available, companies have been building AI agents from scratch, which gives them more flexibility than building on top of abstractions such as LangChain.

This is because LangChain's long-standing problems remain unresolved, one of the biggest being poor documentation. “It has no explanation; it just shows a single implementation example code snippet for each title,” said a user.

LlamaIndex, an alternative to LangChain, is often touted as the better abstraction tool. Even so, several developers make blanket statements like “you should never use them” about abstractions. “They are tools. Flawed in many ways and excellent in others,” said Santiago.

Responding to the complaints about poor documentation, Harrison Chase, the founder of LangChain, assured users that the team is working on fixing it. LangChain's strength lies in its ability to adapt quickly while offering a wide range of services.

Bad Timing?

Earlier, AIM compiled a list of alternatives to LangChain for building AI agents. LangChain's offering, though aimed at convenience, has ironically created a host of challenges: the layers of abstraction it introduces have drawn accusations of unnecessary complication, leaving developers questioning whether it helps at all.

This additional complexity offers no tangible benefits and makes the code harder to manage. Good abstractions simplify your code and reduce cognitive load, but LangChain’s approach does the opposite.

Octomind, a company that had been using LangChain since 2023, recently published a blog post explaining why it recommends against using it in production. As its requirements grew more sophisticated, LangChain's inflexibility became apparent, “turning LangChain into a source of friction, not productivity”.

One of the main reasons LangChain is slowly losing its charm among developers and companies alike is that it was a very early product, one that could not easily adapt as new frameworks and LLMs arrived on the market.

“I’m sure that if I had attempted to build a framework like LangChain when they did, I wouldn’t have done any better,” said the writer of the blog.

To be fair, LangChain has tried to keep pace. When OpenAI released its GPTs alongside the Assistants API, LangChain rushed to match them, but ran into several problems along the way.

LangChain was beneficial initially when the needs of companies aligned with its assumptions. However, its high-level abstractions soon complicated the code, making it harder to understand and maintain. “Our team spent excessive time debugging LangChain instead of building features, signalling a problem.”

Moreover, once most of an agent's architecture is built on LangChain, shifting to a different design, whether a single sequential agent or something more complex, becomes difficult; LangChain itself turns into the limiting factor.

“LangChain does not provide a method for externally observing an agent’s state, resulting in us reducing the scope of our implementation to fit into the limited functionality available to LangChain Agents,” said Fabian Both, the author of the Octomind blog.
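The observability gap Both describes can be illustrated with a toy agent loop. This is a schematic sketch in plain Python, not LangChain's actual API: an agent that accepts an `on_step` callback lets external code inspect its intermediate state at every step, instead of exposing only the final answer.

```python
# Schematic illustration (not LangChain's real API): an agent loop that
# exposes its intermediate state through an optional callback, so callers
# can observe each step rather than only the final answer.

def run_agent(task, tools, on_step=None, max_steps=5):
    state = {"task": task, "history": []}
    for step in range(max_steps):
        # Naive tool selection: first tool whose name appears in the task.
        tool = next((t for t in tools if t.__name__ in task), tools[0])
        result = tool(task)
        state["history"].append(
            {"step": step, "tool": tool.__name__, "result": result}
        )
        if on_step:
            on_step(state)  # the external observer sees the full state
        if result.get("done"):
            return result["answer"], state
    return None, state

# Two stand-in tools; a real agent would call search APIs, code, etc.
def search(task):
    return {"done": False, "answer": None}

def calculator(task):
    return {"done": True, "answer": 42}

observed = []
answer, final_state = run_agent(
    "compute with calculator", [search, calculator], on_step=observed.append
)
```

The point is the `on_step` hook: without such an opening in the framework, the caller is limited to whatever the agent chooses to return, which is exactly the constraint Octomind hit.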

Building with LLMs Should be Straightforward

Speed and innovation are crucial to AI development, driven by experimentation and prototyping. Frameworks designed to enforce structure can limit iteration speed by imposing unnecessary abstractions.

“Presuming you’re not shipping rubbish code to production, the speed at which a team can innovate and iterate is the most important metric for success,” said Both.

Still, some developers argue that LangChain has done for LLMs what NumPy and Pandas did for machine learning: greatly increased their usability and functionality (though not to the extent PyTorch did). With LangChain, developers can connect an application to an LLM, or connect an LLM to a large dataset and query it.
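The "connect an LLM to a dataset" pattern is essentially retrieval-augmented generation. The sketch below is illustrative only: it uses toy keyword overlap in place of a vector store, and a placeholder where the actual LLM call would go.

```python
# Schematic retrieval-augmented generation: retrieve the most relevant
# document for a query, then build a prompt around it. Keyword overlap
# stands in for the embedding similarity a real pipeline would use.

documents = [
    "LangChain is a framework for building LLM applications.",
    "Llama 3.1 is an open-weight model that can run locally via Ollama.",
    "Pandas is a Python library for tabular data analysis.",
]

def retrieve(query, docs, k=1):
    # Rank documents by how many words they share with the query.
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("what is LangChain", documents)
# In a real application, `prompt` would now be sent to an LLM.
```

Frameworks package this loop behind classes; building it by hand, as companies moving off LangChain do, keeps every step visible and swappable.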

What Max Woolf said about LangChain last year still applies. Using the example of translating English to French, Woolf showed that LangChain requires about the same amount of code as the official OpenAI library, except that it routes everything through additional object classes for little obvious benefit.
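Woolf's point can be sketched without calling any API. Both snippets below produce the same chat messages for an English-to-French translation request; the second routes the same strings through template and chain objects. The class names are made up for illustration, not LangChain's real API.

```python
# Illustrative comparison (class names are hypothetical, not LangChain's):
# both approaches yield identical chat messages; the second simply adds
# object layers without changing the output.

SYSTEM = "Translate the user's text from English to French."

# Direct style: build the message list yourself.
def direct_messages(text):
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": text},
    ]

# Framework style: the same result via template and chain objects.
class PromptTemplate:
    def __init__(self, system):
        self.system = system

    def format(self, text):
        return [
            {"role": "system", "content": self.system},
            {"role": "user", "content": text},
        ]

class Chain:
    def __init__(self, template):
        self.template = template

    def invoke(self, text):
        return self.template.format(text)

chain = Chain(PromptTemplate(SYSTEM))
assert direct_messages("I love programming.") == chain.invoke("I love programming.")
```

The extra classes pay off only when the template or model genuinely varies; for a fixed task like this, they are pure overhead, which was Woolf's complaint.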

This makes building AI tools harder than it should be, adding the step of learning LangChain itself before any real work begins. Modular building blocks and minimal abstractions are the way forward for quick development and less friction.
