I've seen a lot of news about local LLMs and people's attempts to fine-tune Llama 2 on HN. But surprisingly, I haven't seen anyone build a local version, or at least an open-API version, of Copilot.
In fact, there are a lot of pair-programming coding products. Some are open-source-fueled, some are OpenAI-fueled, and some were doing their own weird thing years before Copilot was released.
Maybe I'm misunderstanding your question. Off the top of my head, there are codellama and starcoder, and I'm sure I've seen various others. These are downloadable coding models that you can run on your own compute. Did you mean something else?
What's stopping them is the millions of dollars in compute costs it takes to train the model, as well as the expense of developing the IDE integration that makes Copilot perform as well as it does.