10 things to watch out for with open source gen AI

It seems anyone can make an AI model these days. Even if you don’t have the training data or programming chops, you can take your favorite open source model, tweak it, and release it under a new name.
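To get a sense of how low that bar is, here is a rough sketch of the tweak-and-rerelease workflow in Python, using Hugging Face's transformers, peft, and datasets libraries. The base model, dataset, and repository names are stand-ins, and an actual run would need a sizable GPU:

```python
# A minimal sketch of the "tweak it and release it under a new name"
# workflow: LoRA fine-tuning an open checkpoint, then publishing the
# adapter to the Hub. Assumes transformers, peft, and datasets are
# installed and you are logged in via `huggingface-cli login`.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "mistralai/Mistral-7B-v0.1"  # any permissively licensed base model

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

# LoRA trains a small set of adapter weights instead of all 7B parameters.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# A small public instruction dataset as a stand-in; substitute your own data.
data = load_dataset("yahma/alpaca-cleaned", split="train[:1000]")
data = data.map(lambda x: tokenizer(x["output"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Publish the tuned adapter under a new name of your choosing.
model.push_to_hub("your-username/my-tuned-model")  # hypothetical repo name
```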

According to Stanford’s AI Index Report, released in April, 149 foundation models were released in 2023, two-thirds of them open source. And the number of variants is staggering. Hugging Face currently tracks more than 80,000 LLMs for text generation alone, and fortunately it has a leaderboard that lets you quickly sort models by how they score on various benchmarks. And these models, though they lag behind the big commercial ones, are improving quickly.
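The catalog behind those numbers can also be queried programmatically. Here is a minimal sketch using the huggingface_hub client library; it sorts by raw download counts, a rough popularity proxy rather than a benchmark score:

```python
# A minimal sketch, assuming the huggingface_hub package is installed
# (pip install huggingface_hub). Lists the most-downloaded
# text-generation models in the same catalog cited above.
from huggingface_hub import HfApi

api = HfApi()

# Filter to text-generation models, sorted by downloads, descending.
models = api.list_models(
    filter="text-generation",
    sort="downloads",
    direction=-1,
    limit=10,
)

for model in models:
    print(f"{model.id}: {model.downloads:,} downloads")
```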

Leaderboards are a good place to start when evaluating open source gen AI, says David Guarrera, generative AI lead at EY Americas, and Hugging Face in particular has done a good job of benchmarking.

“But don’t underestimate the value of getting in there and playing with these models,” he says. “Because they’re open source, it’s easy to do that and swap them out.” And the performance gap between open source models and their closed, commercial alternatives is narrowing, he adds.
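Because most open source checkpoints share the same loading interface, trying one against another is often a one-line change. Here is a rough sketch of the swap Guarrera describes, using the transformers pipeline API; the two checkpoints named are illustrative examples from the Hub, and each needs enough GPU memory to load:

```python
# A minimal sketch of swapping open source models, assuming the
# transformers package is installed and hardware with enough memory.
# The model names are illustrative examples, not endorsements.
from transformers import pipeline

PROMPT = "Summarize the trade-offs of open source LLMs in one sentence."

# The pipeline API is model-agnostic, so trying an alternative is
# just a matter of pointing it at a different Hub checkpoint.
for model_name in (
    "mistralai/Mistral-7B-Instruct-v0.2",
    "tiiuae/falcon-7b-instruct",
):
    generator = pipeline("text-generation", model=model_name)
    result = generator(PROMPT, max_new_tokens=100)
    print(model_name, "->", result[0]["generated_text"])
```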

“Open source is great,” adds Val Marchevsky, head of engineering at Uber Freight. “I find open source extremely valuable.” Not only are open source models catching up to proprietary ones in performance, but some offer levels of transparency that closed source can’t match, he says. “Some open source models allow you to see what’s used for inference and what’s not,” he adds. “Auditability is important for preventing hallucinations.”

Plus, of course, there’s the price advantage. “If you have a data center that happens to have capacity, why pay someone else?” he says.

Read the full article at CIO magazine.