Google's TPU v8, Tesla's Own Fab, and Interviews Get Weird
April 23, 2026·1 min read
Three things from today's [TLDR](https://tldr.tech/tech/2026-04-23) caught my eye, and they all point in the same direction: everyone wants to own their silicon and rethink the human bits around it.
Google's TPU v8 is the headline, but the real story is how aggressively Google is pulling away from Nvidia for internal workloads. Each TPU generation has narrowed the gap on training and widened it on inference cost-per-token. If you're building on Gemini or Vertex, this is why your bills keep getting quietly cheaper while OpenAI's don't.
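If you want the cost-per-token intuition as arithmetic, here's a back-of-envelope sketch in Python. Every number in it is a placeholder I made up for illustration, not a real TPU or GPU price or throughput; the point is the shape of the division: owning the chip shrinks the hourly cost in the numerator, so cost per token drops even at identical throughput.

```python
# Back-of-envelope serving economics. All figures below are invented
# placeholders for illustration, not real TPU/GPU prices or throughputs.

def cost_per_million_tokens(hourly_chip_cost_usd: float,
                            tokens_per_second: float) -> float:
    """USD cost to generate 1M tokens on one accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_chip_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical: a rented GPU vs. an in-house chip billed at cost,
# identical throughput on both.
rented = cost_per_million_tokens(hourly_chip_cost_usd=4.00, tokens_per_second=1500)
owned = cost_per_million_tokens(hourly_chip_cost_usd=1.50, tokens_per_second=1500)

print(f"rented: ${rented:.2f}/M tokens  owned: ${owned:.2f}/M tokens")
```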
Tesla spinning up a research fab is the wilder bet. Fabs are brutal: ask Intel. But Tesla doesn't need TSMC-class yields; it needs custom Dojo and FSD chips on its own timeline. This is the same playbook Apple ran with silicon, except Tesla is starting from zero on the manufacturing side. Risky, but the upside is a vertically integrated AI compute stack nobody else has.
Then there are AI-native interviews: candidates using copilots live, companies pretending to be okay with it, everyone secretly confused. The old LeetCode ritual is dead and nobody has agreed on what replaces it. Take-homes get gamed, live coding gets gamed, system design is next.
My take: the hardware story is predictable consolidation, but hiring is the actual mess. Figure out how you'd interview someone today before your next req opens — because the candidates already have.