There’s a lot of buzz right now about how generative AIs like ChatGPT and Bard will revolutionize various aspects of the web, but companies targeting narrower industries are already having success. Writer is one such company, and it just announced a trio of new large language models to power its enterprise copy assistant.
The company lets customers refine these models on their own content and style guides, after which the AI can write, help write, or edit copy to meet internal standards. More than just spotting typos and recommending the preferred word, Writer’s new models can evaluate style and write content themselves, even doing a little fact-checking when they’re done.
But the real appeal is that it can all be done in-house, from fine-tuning to hosting, at least for the two smaller Palmyra-series models.
“No business leader wants their data to be fodder for someone else’s base model, including ours,” CEO May Habib said in a press release. “We offer customers all the benefits of the AI application layer without the risks of other AI applications and commercial models. Business leaders want to invest in solutions that essentially earn them their own LLM.”
Palmyra comes in three sizes: Small, Base, and Large, at 128 million, 5 billion, and 20 billion parameters respectively. They were trained on business and marketing writing, not Reddit posts and Project Gutenberg, so there are fewer surprises to begin with. Then you feed the model the last decade’s worth of annual reports, financials, blog posts, and so on to make it yours. (This and any derived data don’t filter back to Writer, just to be clear.)
Having written my share of business and marketing copy, I can say this isn’t the most exciting application. But what it lacks in excitement it makes up for in practicality: companies have to do a lot of this kind of writing and editing, and they tend to actually pay for it. Writer already integrates with many development and productivity suites, so there’s little added friction.
The business model is similar to that of other generative AI companies: you get everything set up and tuned for free, then pay a cent per thousand tokens, which gets you about 750 words. (This article is just over 500, for quick reference.)
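To put that pricing in perspective, here is a back-of-the-envelope estimate using only the figures stated above (a cent per 1,000 tokens, roughly 750 words per 1,000 tokens); the function name and structure are purely illustrative, not part of any Writer API:

```python
# Rough cost estimate from the article's stated pricing:
# $0.01 per 1,000 tokens, which covers roughly 750 words.
PRICE_PER_1K_TOKENS = 0.01  # USD
WORDS_PER_1K_TOKENS = 750   # approximate conversion from the article

def estimated_cost(words: int) -> float:
    """Approximate USD cost to generate a given number of words."""
    tokens = words / WORDS_PER_1K_TOKENS * 1000
    return tokens / 1000 * PRICE_PER_1K_TOKENS

# An article of just over 500 words would cost well under a cent.
print(round(estimated_cost(500), 4))
```

In other words, even a steady stream of marketing copy costs pennies at these rates, which is presumably the point.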
Alternatively, you can host the Small or Base models yourself for free if you have the computing power.
A few dozen companies have been using the models since late last year, and we haven’t heard of any serious issues like the first-day stumbles of Microsoft’s and Google’s efforts to popularize generative AI…so that’s a good sign. This is the success I spoke of earlier: while ChatGPT is certainly impressive, as a generalist or dilettante AI it’s hard to say what it can actually be used for. The next couple of years will see more focused plays like Writer’s as Microsoft and Google kick the tires on their latest toy.