Just did a funding round. In a sign of the times, ClickHouse used to be an interesting DB product but is now "database software that companies can use as they develop AI agents."
"Database technology startup ClickHouse Inc. has raised $400 million in a new funding round that values the company at $15 billion — more than double its valuation less than a year ago."
https://www.bloomberg.com/news/articles/2026-01-16/clickhous...
https://clickhouse.com/blog/clickhouse-raises-400-million-se...
Investors are finicky creatures. If you've been relying on VC funding from the start, it's hard to stop until you're really successful, and if everyone only looks at shiny AI stuff and you still need investors, you don't end up with much choice.
I wish there were less of it; we'd have better software then, but :/
Yeah, like FOSS, which has been drastically underfunded since birth yet continues to put out software that the entire world ends up relying on, instead of whatever VC-pumped companies are putting out.
I don't mean "better software" as in "made a lot of money"; I mean "better" as in "had a better impact on the world".
The FOSS software that many rely on and that has been around for a while was non-VC: VCS, Linux / GNU / BSD, web browsers, various programming languages, various databases...
Sure, those projects were un(der)funded in the 80s and 90s, but the reason we talk about them today is the huge amount of investment, both direct and in kind, that VC-backed companies have managed to give many of them.
I think it’s easy to forget how long ago it was when FOSS truly was the outsider and wouldn’t be touched by most companies.
Mozilla/Firefox started in 1998 and then started taking ad revenue from Google in 2005, which pays for a large chunk of its development. It’s been part of the Silicon Valley money machine for 20 years, most of its existence.
I sometimes wonder if the VC ecosystem creates its own confirmation bias by making it easy to see and aggregate companies it incubates. Whenever I look for jobs, I'm always surprised to find companies that have taken no VC funding and don't try particularly hard to market to the industry as a whole, preferring instead to stay relatively under the radar.
They tend to have more grounded financials (read: paths to profitability) and while the pay packages aren't quite aligned with the top end of the market, they also tend to manage headcount more responsibly than FAANG. I work with a fairly niche stack and I'm constantly finding new companies that I've never heard of and don't raise VC rounds.
Long way of saying that just because they're not easy to find doesn't mean they don't exist.
I think it’s hard to make money as a pure play DB vendor and has been for a decade or two. So they all inevitably pivot into some service specific to whatever the hot use case of the moment is… Cybersecurity. Observability. Crypto. AI.
Note that the headline is from Langfuse, not ClickHouse. Reading the announcement from ClickHouse[0], the headline is "ClickHouse welcomes Langfuse: The future of open-source LLM observability". I think the Langfuse team is suggesting that they will be continuing to do the same work within ClickHouse, not that the entire ClickHouse organization has a goal of building the best LLM engineering platform.
[0] https://clickhouse.com/blog/clickhouse-acquires-langfuse-ope...
Interesting headline for a checks notes time series database company.
Your notes aren't very good. They're not a time series database company; they're a columnar database company. But yeah, the LLM bit is weird; database companies _always_ feel like charlatans when it comes to LLMs.
Could you elaborate? That sentence made my brow wrinkle with confusion. I have thought to myself before that all business data problems eventually become time series problems. I'd like to understand your point of view on how LLMs fit into that.
Time series just means that the order of features matters. Feature 1 occurs before feature 2.
E.g, fitting a model to house prices, you don’t care if feature 1 is square meters and feature 2 is time on market, or vice versa, but in a time series, your model changes if you reverse the order of features.
With text, the meaning of word 2 is dependent on the meaning of word 1. With stock prices, you expect the price at time 2 to be dependent on time 1.
Text can be modeled as a time series.
A language model tells you the next character/token/word depending on the previous input.
Language models are time series.
It’s not an audacious claim.
Any student of NLP should have come across a paper modeling text as a time series before writing their thesis. How could you not?
It's great when you get this insight as a student of NLP, because suddenly your toolset grows quite a bit.
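To make the distinction concrete, here's a toy sketch (purely illustrative, not tied to any of the tools in this thread): a tabular model doesn't care in which order you list the features, while a next-token model is order-dependent by construction.

    # Toy example: feature order is irrelevant for tabular data, but order is
    # the whole point for sequences like text.
    from collections import Counter, defaultdict

    # Tabular: reordering the (name, value) pairs describes the same house,
    # so any sane tabular model gives the same prediction either way.
    house = {"square_meters": 80, "days_on_market": 30}
    reordered = dict(reversed(list(house.items())))
    assert house == reordered  # same information, order irrelevant

    # Sequential: a character-level bigram "language model".
    def bigram_counts(text):
        counts = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
        return counts

    forward = bigram_counts("time series time series time")
    print(forward["t"].most_common(1))  # [('i', 3)]: after 't' the model expects 'i'

    backward = bigram_counts("time series time series time"[::-1])
    print(forward == backward)  # False: reverse the sequence and you get a different model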
"Berkshire Hathaway Inc. is an American multinational conglomerate holding company" is a weird thing for a textile manufacturer to call itself. Almost like...businesses expand and evolve?
(they've never been a time series database company either lol)
I'm surprised it's not mentioned yet, but this seems to complement last year's acquisition of the observability tool HyperDX[1] (part of ClickStack[2]) quite well. I'm in the market for a new o11y platform, and it seems all vendors are working to add LLM observability one way or another, if they haven't added it already.
[1] https://news.ycombinator.com/item?id=44194082 [2] https://clickhouse.com/use-cases/observability
As a big ClickHouse fan, I'd say agent evals are where their product really shines. They're buying into a market segment where their product is succeeding so they can vertically integrate and tighten up the feedback loop.
This is part of a bigger consolidation trend, AI hype or not: which general-purpose data vendor gets to store and query all of your observability and business data?
Snowflake acquired Observe last week, AWS made it easy in December to put logs from CloudWatch into their managed Iceberg catalog, and Azure is doing a bunch of interesting stuff with Fabric.
The line between your data lake/analytics vendor and observability vendor is getting blurry.
It seems like an expansion play from their team, and their end vision as both a platform (ClickHouse + Postgres) and a product (observability) seems like a combo that fits hand in hand.
For those building applications with Langfuse and ClickHouse: do you like these products? I get the odd request to do an AI thing, and my previous experience with LLM wrappers convinced me to stay away from them (Langchain, LlamaIndex, Autogen, others). In some cases they were poorly written, and in others the march of progress rendered their tooling irrelevant fairly quickly. Are these better?
I've used Langfuse. It's completely unrelated to tools like Langchain and Autogen; it's just logging/tracing for LLMs. Sure, they added stuff like "prompt management" and "experiments" etc., probably to keep investors happy, but those are entirely optional side dishes.
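To give a feel for it, the tracing usage looks roughly like this (a sketch from memory of the Python SDK; the import path has moved between versions, and call_model() here is just a hypothetical stand-in for your own provider call, so check the current docs):

    # Sketch of Langfuse-style tracing via the @observe decorator (SDK import
    # paths differ between versions; older ones use langfuse.decorators).
    from langfuse import observe

    def call_model(system: str, user: str) -> str:
        """Hypothetical stand-in for your actual provider call."""
        return f"[model answer to: {user[:40]}]"

    @observe()  # records inputs, outputs, timing and nesting as a trace
    def summarize(ticket_text: str) -> str:
        return call_model(
            system="Summarize the support ticket in two sentences.",
            user=ticket_text,
        )

    @observe()  # nested calls show up as child spans of the parent trace
    def triage(ticket_text: str) -> dict:
        summary = summarize(ticket_text)
        priority = call_model(system="Assign a priority P0-P3.", user=summary)
        return {"summary": summary, "priority": priority}

The point being: it's instrumentation wrapped around whatever client code you already have, not a framework you build on top of.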
The tools you mentioned are indeed to be avoided. I trialed them early on and quickly realized that in 99.9% of cases they do nothing but bog you down. Pretty sure they'll be dead sooner rather than later.
The observability stuff can be nice for deployments, but really, these libraries/frameworks don't do much more than provide some structure, which doesn't matter all that much unless you're expecting a team with high turnover to maintain it. If you're an experienced developer, you'll find better designs/architectures for your use case without them.
Hm I find this very much a "please reinvent the wheel" take.
These frameworks provide structure for established patterns, but they also do a lot that you no longer have to do yourself. If you are, for example, building an agentic application, then these kinds of frameworks make it very simple to create the workflows, handle the chat with the model providers, and provide structure for agentic skills, decision making, the human in the loop, etc.
All stuff that I would consider "low level". All things you don't have to build.
If you have an aversion to frameworks then sure, by all means. But if you like to move faster using good building blocks, then these frameworks really help.
One thing to keep in mind: many of these AI frameworks are open source and work really well without needing backend services, or you can self-host them where needed. For many of them, though, the premium model is exactly that: please use and pay for our backend services. But that is also a choice, of course.
> All stuff that I would consider "low level". All things you don't have to build.
But those are also fairly trivial to build, and you end up having to customize them for your needs anyway; if the framework doesn't have those levers, be prepared to either fork it or spend time contributing upstream.
Or start simple yourself with what you need, use libraries for the hairy parts whose implementation you don't want to be responsible for, then pipe these things together. You'll get a less compromised experience, and you'll understand 100% how everything works, which is the part people generally try to avoid, and that's why they reach for frameworks.
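Concretely, the "no framework" version tends to look something like this (a minimal sketch assuming the official OpenAI Python client; the model name is a placeholder, and you'd swap in whichever provider you actually use):

    # A bare loop calling the provider directly, with your own message history.
    # No framework; just a thin helper you fully control.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(messages: list[dict]) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        return resp.choices[0].message.content

    history = [{"role": "system", "content": "You are a terse assistant."}]
    for question in ["What is ClickHouse?", "And what is Langfuse?"]:
        history.append({"role": "user", "content": question})
        answer = ask(history)
        history.append({"role": "assistant", "content": answer})
        print(answer)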
> But if you like to move faster using good building blocks, then these frameworks really help.
I find that they help a lot with the "move faster" part in the beginning, but after that period, they slow you down instead. But I'm also a person that favors "slow software design and development" where you take your time to nail down a good design/architecture before you run. Slow is fast, and avoiding hairballs is the most important part if you're aiming for "move fast for longer" rather than "a sprint of fast".
Without the purchase price, it is unclear whether this deserves congratulations or condolences.
Two years in the LLM race will have definitely depleted their seed raise of $4m from 2023, and with no news of additional funds raised it's more than likely this was a fire sale.
I'm pretty sure it was not a fire sale. Langfuse has been consistently growing; they publish some stats about SDK usage etc., so you can look that up.
They also say in the announcement that they had a term sheet for a good Series A.
I think the team just took the chance to exit early before the LLM hype crashes down. There is also a question of how big this market really is: they mostly do observability for chatbots, but there are only so many of those, and with other players like OpenAI's tracing, Pydantic Logfire, PostHog, etc., they become more a feature than a product of their own. Without a great distribution system they would eventually fall behind, I think.
Two years to a decent exit (probably a $100m cash-out or so, with a good chunk being ClickHouse shares) seems like a better idea than betting on that story continuing forever.
I don’t know about that. I looked at them a couple of months back for prompt management and they were pretty behind in terms of features. Went with PromptLayer
Agreed, Sentry, Posthog, and many more are all doing the exact same thing now, I'd be surprised if this was a good deal for Langfuse. I personally migrated away from it to use Sentry, their software was honestly not that great.
Anecdotally, from the AI startup scene in London, I do not know folks who swear by Langfuse. Honestly, evals platforms are still only just starting to catch on. I haven't used any tracing/monitoring tools for LLMs that made me feel like, say, Honeycomb does.
I'd say out of many generative AI observability platforms, Langsmith and Weave (Weights&Biases) are probably the ones most enterprises use, but there's definitely space for Langfuse, Modelmetry, Arize AI, and other players.
I predict Pydantic will be next to get picked up by someone, for Logfire and their agent framework... Fine by me; as long as all these open-source projects stay open source, good for them.
SaaS company pivots to AI. Gets funding, rebrands as an AI company. Buys a company that actually knows AI.
It's still early, but I question how many of these SaaS companies will survive. I'd rather connect Claude or whatever to do my task than have to learn a new platform, let alone log in to it.
I don't think that is an accurate depiction of ClickHouse. I don't think they're pivoting from their main data warehousing product at all; they're probably just making their cloud offering more competitive with other providers.
I haven't used their product, so you're probably right. I'm biased as an AI engineer, because I get contacted to help implement AI in existing platforms. While I admire the pivot, the reality is that what they have is already quite behind. Anything I make these days is old in about three months… You'd ideally want to start fresh and not have to worry about a codebase that is years old.
I do understand why it's a product: it feels a bit like what Databricks has with model artifacts, i.e., having a repo of prompts you can track performance changes against is good, especially if you have users other than engineers touching them (e.g., a product manager wants to A/B test).
Having said that, I struggled a lot with actually implementing Langfuse due to numerous bugs and confusing AI-driven documentation. So, to be really frank, I'm amazed that it's being bought. I was just on the free version in order to look at it and make a broader recommendation, and I wasn't particularly impressed. Mileage may vary though; perhaps it's a me issue.
I thought the docs were pretty good just going through them to see what the product was. For me I just don't see the use-case but I'm not well versed in their industry.
I think the docs are great to read, but implementing was a completely different story for me, ie, the Ask AI recommended solution for implementing Claude just didn’t work for me.
They do have GitHub discussions where you can raise things, but I also encountered some issues with installation that just made me want to roll the dice on another provider.
They do have a new release coming in a few weeks so I’ll try it again then for sure.
Edit: I think I'm coming across as negative, but I do want to say that Langfuse is worth trying out if you're looking at observability!
Iterating on LLM agents involves testing on production(-like) data. The most accurate way to see whether your agent is performing well is to see it working in production.
You want to see the best results you can get from a prompt, so you use features like prompt management and A/B testing to see which version of your prompt performs better (i.e. is a better fit for the model you are using) in production.
We use it for our internal doc analysis tool. We can easily extract production generations, save them to datasets, and test edge cases.
It also allows organizing prompts into folders. With this, we have a pipeline for doc analysis where we have default prompts and the user can set custom prompts for a part of the pipeline. Execution checks for a user prompt before inference; if there isn't one, it uses the default prompt, which is already cached in code. We plan to evaluate user prompts to see which may perform better and use them to improve the default prompts.
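The fallback described above is roughly this shape (a sketch; get_user_prompt() and run_inference() are hypothetical stand-ins for however you look up a user's custom prompt, e.g. from Langfuse prompt management, and call your model):

    # Default-vs-user prompt fallback for one pipeline step.
    from typing import Optional

    DEFAULT_PROMPTS = {
        # cached in code, used whenever the user hasn't customized a step
        "extract_clauses": "List every contractual clause in the document.",
        "summarize": "Summarize the document in five bullet points.",
    }

    def get_user_prompt(user_id: str, step: str) -> Optional[str]:
        """Hypothetical lookup of a user-defined prompt for this step."""
        return None  # pretend this user hasn't customized anything

    def run_inference(prompt: str, document: str) -> str:
        """Hypothetical model call; replace with your actual client."""
        return f"[model output for prompt: {prompt[:30]}...]"

    def run_step(user_id: str, step: str, document: str) -> str:
        # Prefer the user's prompt; fall back to the cached default.
        prompt = get_user_prompt(user_id, step) or DEFAULT_PROMPTS[step]
        return run_inference(prompt=prompt, document=document)

Keeping track of which prompt produced which output is then what makes the "evaluate user prompts against the default" step possible.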
Correct! Will be moving away immediately for this reason.
Or well, technically incorrect, as someone will surely point out. US companies can be legally compliant with GDPR, it's just that the likes of the CLOUD Act and FISA make it completely meaningless.
Before anyone comes in talking about how far-fetched it is that those matter: it's 100x as far-fetched that self-hosted Chinese LLM models would exfiltrate your data (you can even air-gap them), yet 90% of corporate America avoids them based solely on the country they were trained in. Compared to that insanity, the above US acts are a very real threat.
And that's of course on top of the fact that a company from an adversarial state now has the power to immediately dissolve Langfuse.
Very sad, for all their marketing around the EU, GDPR, privacy and so on. I feel dumb for having fallen for it a little.
This is a big reason why there are so few EU tech startups: they get bought out if they're doing well. More and more consolidation in tech, more and more "exits".
I don't know why people are so upset here.
Every single day there is an acquisition on here. What's going on in the macro?
For good or bad, I think we're pretty "SaaS/vc/etc." already.