✱ Thoughts on Microsoft bringing AI to Office

This isn't your grandma's Clippy.

Extending the name beyond GitHub Copilot, Microsoft is soon bringing 'Copilot' to the masses. This is, in my view, perfect branding for something like this: it keeps you and me at the center of the creative process and lets our copilot step in and augment us when needed.

Microsoft has invested more than $10 billion in OpenAI, and integrating GPT-4 into the ubiquitous Office suite seems like the obvious next step after the recent Bing integration for realizing a significant return on that investment. Office is a paid subscription, after all, and Bing is free. Products aside, are either of these companies interested in methodical, safe, and ethical technology advancement? Or are they purely motivated by greed?


Microsoft is obviously pushing to embed itself even further into the enterprise, beyond its existing market dominance, so that businesses become still more reliant on the conveniences AI offers to those who use these products every day.

Microsoft:

With Copilot, you’re always in control. You decide what to keep, modify or discard. Now, you can be more creative in Word, more analytical in Excel, more expressive in PowerPoint, more productive in Outlook and more collaborative in Teams.

Everyone wants to be more creative, analytical, expressive, productive, and collaborative. How can anyone refuse an offer like that?

Check out the launch video here to get a sense of what is coming. No arrival date has been announced, but I fully expect it to actually ship.

In my opinion, Microsoft pushed out the new and “improved” Bing way too early (as exemplified in Ben Thompson's story about Sydney in a previous edition of the TRXL newsletter), and I think they’re moving too fast in putting GPT-4 into a product that people actually use on a daily basis, Office 365, as opposed to Bing, which holds roughly 9% of the search market as of this writing.

“Move fast and break things” is a common trope in tech, but I truly wonder if breaking the human race is considered an acceptable amount of collateral damage in this race to be first to summit the AI mountain. And by summit, I obviously mean monetize it in substantial ways, in perpetuity, as a tool we can't live without.

Facebook and Twitter have contributed to the undermining of the US political system. YouTube, Instagram, and TikTok have figured out how to keep our attention, milk our dopamine production, and tell us and our kids what we are going to watch next. In order to compete, Microsoft is pursuing the next wave of social engineering, aimed at businesses, with this move. Embedding this kind of AI into the most basic and foundational tools of business will surely lead to blind acceptance of whatever the machines tell us, without much thought as to whether it’s accurate.

Don’t believe that claim? I'd argue that we’re already trained to accept the top Google search results without digging deeper to look for alternative sources. The same goes for Wikipedia. Yelp basically tells us where to eat when we're away from home. The list goes on. Attention spans are at an all-time low. What's delayed gratification again? We can't remember.

Convenience outweighs accuracy every day of the week. After all, our time is considered our most precious resource, so shortcuts are willfully taken.

My main fear about where this could be heading comes down to the old garbage-in, garbage-out adage. Because AI is trained on existing information from the internet, without any disclosure as to the weights applied to the (also undisclosed) sources, it's very likely that disinformation will be as prevalent as useful information. These AI-as-a-product companies employ black-box development in the name of intellectual property and capitalism, and that's a problem, because society has already been slowly trained to have blind faith in the "answers" these platforms effortlessly spit out.

This time will be no different, and it could add up to catastrophic results in just about everything: the education of our next generations¹ (AI already knows everything we used to have to learn in school and passes most of the exams), the next election cycle, the public's waning trust in institutions, and so on. Sure, there are benefits too! But is it really worth it if they don't outweigh the downsides?

Why haven't there been long-term studies done before rolling this out? Clearly this is all moving as fast as possible, and probably not in a responsible way. If such studies existed, we would have heard about them, if only to reassure us. They're figuring it out as they go along, using us in an experiment without a control group.

Of course, I could be wrong about all of this. I'm definitely not an AI researcher, and I'm also not a social scientist. But I don't think Big Tech has learned particularly well even from recent outcomes, wherein they performed social experimentation on the masses to drive advertising and get us to watch whatever they (i.e., the algorithms) decide we are going to watch next. I previously said that time is considered our most precious resource, but clearly it's our attention.

I’ll be treading lightly into this new frontier. These companies, for the most part, haven't had, and still don't have, our best interests at heart.

What I know in my bones is that once people get their hands on this, it’s going to be as addictive as crack, so I fully expect my trepidation to fall on deaf ears. And yet the product-market fit is obvious, and it all seems inevitable. After many years of Microsoft playing defense, it’s very interesting to see them once again on offense.

  1. I could go on a long digression about the education system here, but I won't. I'll just say that education used to be about learning how to learn, not about knowing all the answers.