Last month, I was talking about how I didn’t understand what my blue-chip Fortune 250 company is doing with AI. From AI for all the Corporate IT Things:
Well, it’s a good thing I don’t understand, because he’s not talking about using AI to fix IT. He wants to use “technology” to improve our “safety-ness.” Say wha..? Like, he wants to use AI to improve safety on the factory floor. Huh?! Are we going to buy Tesla robots to pull people’s fingers out of the way of presses?! I’m confused.
I sat in a Zoom call where someone discussed the first pilot program for our official corporate AI efforts. On the one hand, they’ve done exactly what they said they were going to do: they’re using AI to try to reduce OSHA incidents. Surely that’s a noble effort, right? But on the other hand, I have trouble imagining a real-world scenario less suited to AI. I mean, first of all, safety incidents are already scrutinized under a microscope. Second of all, there are so few of them that I don’t believe you can use AI to analyze them; there’s not enough data to establish patterns. On top of that, every incident is an outlier, and gets dealt with immediately, and not in a performative way, but, like, for real. New rules are put in place, guard rails are installed, etc. So these outliers are very, very unlikely to happen again. Ergo, the data is not statistically significant, and whatever else you know about AI, it’s ALL based on statistics. So I don’t get it.
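To put a rough number on that intuition, here’s a back-of-the-envelope sketch in Python, using completely made-up incident rates and job counts (none of this is real data from anywhere). It asks how long you’d have to collect incident data before even a risk factor that doubles the incident rate shows up as statistically significant:

```python
# Back-of-the-envelope power check. Every number below is invented for
# illustration; none of it comes from real safety data.
import math

def one_sided_p_value(k1: int, n1: int, k2: int, n2: int) -> float:
    """Two-proportion z-test: is group 2's incident rate higher than group 1's?"""
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (k2 / n2 - k1 / n1) / se
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for a standard normal

# Pretend plant: a few thousand tracked jobs per group per year, a baseline
# incident rate of 0.2%, and a hypothetical risk factor that doubles that rate.
BASE_RATE, RISKY_RATE, JOBS_PER_GROUP_PER_YEAR = 0.002, 0.004, 3000

for years in (1, 5, 20):
    n = JOBS_PER_GROUP_PER_YEAR * years
    k_base = round(BASE_RATE * n)    # expected incidents in the baseline group
    k_risky = round(RISKY_RATE * n)  # expected incidents in the "risky" group
    p = one_sided_p_value(k_base, n, k_risky, n)
    print(f"{years:>2} year(s): {k_base} vs {k_risky} incidents -> p ≈ {p:.3f}")
```

With numbers in that ballpark, a single year of incidents can’t even confirm a doubled risk, never mind surface subtle patterns, and since every incident triggers new rules and guard rails, the process changes before you could ever accumulate the extra years of comparable data.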
The other thing that strikes me is that we’re using (er, “renting,” I’m quite certain, and at an exorbitant rate) an off-the-shelf AI product called GenAI by Palantir. You know, the love child of the so-called Five Eyes multinational intelligence conglomerate and the company that spies on everyone, everywhere, all of the time. So we’re not using our company’s vast resources to invest in creating our own AI models. We’re just paying our contractors to learn how to operate someone else’s machine. In this golden age, where instructions on how to create models are readily accessible and the coding libraries to implement them proliferate, we’re eschewing the opportunity to create custom models that could address our specific business problems.
Over a year ago, I talked with people about what I think we could do with AI, but I didn’t get anywhere. In the past few months, several other engineers have spoken to me about similar ideas. In the part of the company I inhabit, there is a glaringly obvious use for AI staring us in the face. The problem is that we don’t have all the data we need to make it work, and, from where we sit, it’s impossible to get the owners of the other systems we’d need to tie in with our data to open up their databases to us. That sort of thing is simply never going to happen without a strong, direct proclamation from the CEO, and, even then, getting those people to give up some of their “power” in the company so that someone else can have more is going to be fought up and down the org chart. So we seem stuck. The only things we can use AI for won’t matter, and the things that would make a difference will never be done.