“There are two Brendas - their job is to make spreadsheets in the Finance department. Well, not quite - they add the months and categories to empty spreadsheets, then they ask the other departments to fill in their sales numbers every month so it can be presented to management.
“The two Brendas don’t seem to talk, otherwise they would realize that they’re both asking everyone for the same information, twice. And they’re so focused on their little spreadsheet worlds that neither sees enough of the bigger picture to say, ‘Wait… couldn’t we just automate this so we don’t need to do this song and dance every month? Then we wouldn’t need two people in different parts of the company compiling the same data manually.’
“But that’s not what Brenda was hired for. She’s a spreadsheet person, not a process fixer. She just makes the spreadsheets.”
We need fewer Brendas, and more people who can automate away the need for them.
And then you end up with a team of five people, each three times as expensive as Brenda, and what used to be an email now takes a sprint and has to go through a ticket system.
That's a pretty specific example, when there are a lot of good "spreadsheet people" out there who do a lot more than spreadsheets (maybe they had to write SQL queries or scripts to get those numbers) but commonly need to simplify things down to a spreadsheet or PowerPoint for upper management. I'm not saying you should have multiple people doing redundant work, but this style isn't entirely dumb.
What would this be replaced by? Some kind of large SAP-like system that costs millions of dollars and requires a dozen IT staff to maintain?
Fair - I was creating a straw man mostly to make a point. The people I'm thinking of aren't running SQL queries or scripts; they're merely collection points for data.
So one good BI developer who knows Tableau, Salesforce, Excel, and SQL can replace those pure collection points with a better process, and they can also generate insight into the data, because they have some business understanding from being close to the teams, which is what my hypothetical Brenda can't do.
In my example, Brenda would be asking sales leaders to enter their data instead of going into Salesforce herself, because she doesn't know that tool / side of the company well enough.
I was making the point that, contrary to the article, the Brendas I know aren’t touched by the Excel angels, they’re just maintaining spreadsheets that we probably shouldn’t have anyway.
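To make that concrete: a minimal sketch of the kind of collection-point replacement meant here, assuming the sales numbers already land in some SQL database (the table and column names are hypothetical):

    # Hypothetical monthly rollup: pull the numbers from a database
    # instead of emailing every department a blank spreadsheet.
    import sqlite3

    import pandas as pd

    conn = sqlite3.connect("sales.db")  # stand-in for the real data store
    df = pd.read_sql_query(
        """
        SELECT department,
               strftime('%Y-%m', sold_at) AS month,
               SUM(amount) AS total_sales
        FROM orders
        GROUP BY department, month
        ORDER BY month, department
        """,
        conn,
    )

    # One pivoted sheet for management instead of N manually filled ones.
    report = df.pivot(index="department", columns="month", values="total_sales")
    report.to_excel("monthly_sales.xlsx")  # needs openpyxl installed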
Copilot and AI have been shoved into the Microsoft stack at my org for months. Most of the features were disabled or hopelessly bad. It's cheaper for Microsoft to push this junk and claim they're doing something; it will improve their stock far more than not doing it, even though it's basically useless currently.
Another issue is that my org disallows AI transcription bots. It's a legit security risk to have some random process recording confidential info because the person was too busy to attend the meeting and take notes themselves. Or possibly they just skip the meetings and have AI sit in.
I still find the Copilot transcripts orders of magnitude worse than something like Wispr Flow: they hallucinate constantly and don't adapt to the company's context (which Copilot has access to...). I'm talking about acronyms for products and teams, names of people (even when they're in the call), etc.
That mirrors my experience as well. LLMs get instantly confused in real-world Excel scenarios and confidently hallucinate millions in errors.
If you look at the demos for these, it's always something clean and abundantly available in training data. Like an income statement. Or a textbook example DCF. Or my personal fav, "here is some data, show me insights". Real-world Excel use looks nothing like that.
I'm getting some utility out of them for some corporate tasks, but zilch in the Excel space.
I find the contrast between two narratives around technology use so fascinating:
1. We advocate automation because people like Brenda are error-prone and machines are perfect.
2. We disavow AI because people like Brenda are perfect and the machine is error-prone.
These aren't contradictions because we only advocate for automation in limited contexts: when the task is understandable, the execution is reliable, the process is observable, and the endeavour tedious. The complexity of the task isn't a factor - it's complex to generate correct machine code, but we trust compilers to do it all the time.
In a nutshell, we seem to be fine with automation if we can have a mental model of what it does and how it does it in a way that saves humans effort.
So, then - why don't people embrace AI with thinking mode as an acceptable form of automation? Can't the C-suite in this case follow its thought process and step in when it messes up?
I think people still find AI repugnant in that case. There's still a sense of "I don't know why you did this and it scares me", despite the debuggability, and it comes from the autonomy without guardrails. People want to be able to stop bad things before they happen, but with AI you often only seem to do so after the fact.
Narrow AI, AI with guardrails, AI with multiple safety redundancies - these don't elicit the same reaction. They seem to be valid, acceptable forms of automation. Perhaps that's what the ecosystem will eventually tend to, hopefully.
> So, then - why don't people embrace AI with thinking mode as an acceptable form of automation?
"Thinking" mode is not thinking, it's generating additional text that looks like someone talking to themselves. It is as devoid of intention and prone to hallucinations as the rest of LLM's output.
> Can't the C-suite in this case follow its thought process and step in when it messes up?
That sounds like manual work you'd want to delegate, not automation.
This reminds me of a friend whose company ran a daily Perl script that committed every financial transaction of the day to a database. Without the script, the company could literally make no money, irrespective of sales, because this database was one piece in a complex system for payment processor interoperability.
The script ran on a machine located at the corner of a cubicle, and only one employee had the admin password. Nobody but a handful of people knew of the machine's existence, certainly not anyone in middle management or above. The script could only be updated by an admin.
Copilot may be good, but it sure as hell doesn't know that admin password.
Everywhere I’ve ever worked has had that mission critical box.
At one of my jobs we had a server rack with a UPS, etc., all the usual business. On the floor next to it was a Dell desktop with a piece of paper on it that said "do not turn off". It had our source control server on it, and the power button didn't work. We did eventually move it to something more sensible, but we had that for a long time.
An old colleague and friend used to print out a 30-page Perl script he wrote to do almost exactly this. A stapled copy could always be found on his dining room table.
That sounds pretty bad. Not a great argument against AI: "Our employees have created such a bad mess that AI won't work, because only they know how the mess they created works."
Well, if you do it once then yes, but if you automate the process it's different. E.g. I do this with YouTube videos, because reading a 30-second summary instead of watching a 14-minute video is a time saver. I still watch some videos fully, but many of them are not worth it.
So in summary, I think it was just part of an automated process (maybe), or it will become one in the future.
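For what it's worth, once a transcript exists the automated summary step is only a few lines. A sketch using the openai package; the model name, file name, and prompt are placeholders, not anything from the thread:

    # Sketch: turn a saved video transcript into a 30-second summary.
    # Assumes OPENAI_API_KEY is set; model and file names are placeholders.
    from openai import OpenAI

    client = OpenAI()
    with open("talk_transcript.txt") as f:
        transcript = f.read()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize the transcript in one short paragraph."},
            {"role": "user", "content": transcript},
        ],
    )
    print(resp.choices[0].message.content)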
Excel is the "beast that drives the ENTIRE economy", and he's worried about Brenda from the finance department losing her job, because then her boss will get bad financial reports.
I suppose the person who wrote that has no idea that Excel is just an app builder where you embed data together with code.
You know, we have Excel because computers didn't understand column names in databases, so data extraction had to be done by humans. Humans then designed those little apps in Excel to massage the data.
Well, now an agent can read the boss saying "gimme the sales from last month", and the agent doesn't need Excel for that, because it can query the database itself, massage the data itself using Python, and present the data itself with HTML or PNGs.
So we are in the process of automating Brenda AND Excel away.
Also, finance departments are a very small part of Excel's user base. Just think of everywhere people need small programs: Excel is there.
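A rough sketch of that query-it, massage-it, present-it loop, assuming a SQL store and using pandas plus matplotlib for the PNG (table and column names are made up):

    # Sketch of "gimme the sales from last month" without Excel:
    # query the database, massage with pandas, present as a PNG.
    import sqlite3

    import matplotlib.pyplot as plt
    import pandas as pd

    conn = sqlite3.connect("sales.db")  # hypothetical data store
    df = pd.read_sql_query(
        "SELECT rep, SUM(amount) AS total FROM orders "
        "WHERE sold_at >= date('now', 'start of month', '-1 month') "
        "AND sold_at < date('now', 'start of month') "
        "GROUP BY rep",
        conn,
    )

    ax = df.plot.bar(x="rep", y="total", legend=False,
                     title="Last month's sales by rep")
    ax.set_ylabel("Sales")
    plt.tight_layout()
    plt.savefig("last_month_sales.png")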
The post is clearly hyperbole; obviously the sole issue being brought up isn't "Brenda losing her job may be bad for the company". You're being facetious.
You missed this bit “.. and then the AI is gonna fuck it up real bad and he won't be able to recognize it because he doesn't understand because AI hallucinates.”
The underlying assumption is that Brenda generally does her job pretty well. Human errors exist but usually peers/managers (or the person who did it) can identify and correct them reliably.
If we have to compare LLMs against people who are bad at their jobs in order to highlight their utility, we're going in the wrong direction.
"the sweat from Brenda's brow is what allows us to do capitalism."
The CEO has been itching to fire this person and nuke her department forever. She hasn't gotten the hint with the low pay or long hours, but now Copilot creates exactly the opening the CEO has been looking for.
Brenda has been getting slower over the years (as we all have), but soon the boss will learn that it was a small price to pay for knowing how to keep such a house of cards from collapsing.
And then the boss will decide to outsource her job to a company that promises to use AI to make finance better and faster, and while Brenda is in the unemployment line, someone else thousands of miles away will be celebrating a new job.
This is transparent nonsense. People are very very happy to introduce errors into excel spreadsheets without any help from AI.
Financial statements are correct because of auditors who check the numbers.
If you have a good audit process then errors get detected even if AI helped introduce them. If you aren't doing a good audit then I suspect nobody cares whether your financial statement is correct (anyone who did would insist on an audit).
At some point, a publicly-listed company will go bankrupt due to some catastrophic AI-induced fuck-up. This is a massive reputational risk for AI platforms, because ego-defensive behaviour guarantees that the people involved will make as much noise as they can about how it's all the AI's fault.
I don't find comments along the lines of 'those people over there are bad' to be interesting, especially when I agree with them. My comment is about why it'll go wrong for them.
I see the inverse of that happening: every critical decision will incorporate AI somehow. If the decision was good, the leadership takes credit. If something terrible happens, blame it on the AI. I think it's the part no one is saying out loud. That AI may not do a damn useful thing, but it can be a free insurance policy or surrogate to throw under the bus when SHTF.
I'm actually not that worried about this, because again I would classify it as a problem that already exists. There are already idiots in senior management who pass off bullshit and screw things up. There are natural mechanisms to cope with this, primarily business reputation: if you're one of those idiots, people very quickly start discounting what you're saying. They might not know how you're wrong, but they learn quickly that you can't be trusted to self-check.
I'm not saying that this can't happen or that it's not bad. Take a look at nudge theory: the UK government created an entire department and spent enormous amounts of time and money on what they thought was a free lunch - that they could just "nudge" people into doing the things they wanted. So rather than actually solving difficult problems, the UK government embarked on decades of pseudo-intellectual self-aggrandizement. The entire basis of that decades-long debacle was bullshit data and fake studies. We didn't need AI to fuck it up; we managed it perfectly well by ourselves.
Both are valid concerns, no need to decide. Take the USA: they are currently led by a patently dumb president who fucks up the global economy, and at the same time they are powerful enough to do so!
For a more serious example, consider the Paperclip Problem[0] for a very smart system that destroys the world due to very dumb behaviour.
[0]: https://cepr.org/voxeu/columns/ai-and-paperclip-problem
Are you suggesting that Brenda should stay in her box?
True... I have an on-staff data engineer for the purpose. But not all companies (especially in the SMB space) have that luxury.
No, no. We disavow AI because our great leaders inexplicably trust it more than Brenda.
"Thinking" mode is not thinking, it's generating additional text that looks like someone talking to themselves. It is as devoid of intention and prone to hallucinations as the rest of LLM's output.
> Can't the C-suite in this case follow its thought process and step in when it messes up?
That sounds like manual work you'd want to delegate, not automation.
The script ran in a machine located at the corner of a cubicle and only one employee had the admin password. Nobody but a handful of people knew of the machine's existence, certainly not anyone in middle management and above. The script could only be updated by an admin.
Copilot may be good, but sure as hell doesn't know that admin password.
At one of my jobs we had a server rack with UPS, etc, all the usual business. On the floor next to it was a dell desktop with a piece of paper on it that said “do not turn off”. It had our source control server in it, and the power button didn’t work. We did eventually move it to something more sensible but we had that for a long time
Yes, most situations are terrible and would do better if an expert was present to perfect it.
(I pulled the quote by using yt-dlp to grab the MP4 and then running that through MacWhisper to generate a transcript.)
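That pipeline is short enough to script end to end; a sketch using yt-dlp plus the open-source whisper package as a stand-in for MacWhisper (the URL is a placeholder):

    # Grab a video's audio with yt-dlp, then transcribe it with
    # open-source Whisper (stand-in for MacWhisper). URL is a placeholder.
    import subprocess

    import whisper

    url = "https://www.youtube.com/watch?v=EXAMPLE"
    subprocess.run(
        ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "clip.%(ext)s", url],
        check=True,
    )

    model = whisper.load_model("base")
    result = model.transcribe("clip.mp3")
    print(result["text"])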
Is MacWhisper a $60 GUI for a Python script that just runs the model?
Yes, a large genre of macOS apps is "native GUI wrappers around OSS scripts".
Are you a fanatic who thinks anyone saying there are limitations to current models is a naysayer?
Like, if someone says they wouldn't want a heart transplant operation done purely by GPT-5, are they a naysayer, or is that just reflecting reality?