Yes, but training is the most expensive part of ML; GPT-3's training run, for example, is estimated to have cost something like 1-4 million USD.
With an ANN you can train once and then clone the resulting weights at negligible energy cost.
Maybe training a batch of PNNs in parallel could save some of the energy cost, but I don't know how feasible that is, considering the physical networks could behave slightly differently during training and diverge... Now that sarcastic comment at the bottom of this thread ("Schools") is starting to sound relevant.
> Yes, but training is the most expensive part of ML; GPT-3's training run, for example, is estimated to have cost something like 1-4 million USD.
That entirely depends on how many inferences the model performs over its lifetime. Estimates of ChatGPT's energy consumption vary, but they range from something like 500-1000 MWh a day. Assuming an electricity price of $0.165 per kWh, that puts you at roughly $80,000 to $160,000 a day.
Even at the lower end of $80,000 a day, you'd reach the $4 million training cost in just 50 days.
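For what it's worth, here's that break-even arithmetic spelled out (a quick sketch using the low-end estimates above; the figures are the rough numbers from this comment, not measurements):

    # Low-end estimates from the comment above, not measured values.
    energy_mwh_per_day = 500        # ChatGPT daily consumption, low end
    price_per_kwh = 0.165           # assumed electricity price in USD
    training_cost = 4_000_000       # high-end GPT-3 training estimate, USD

    daily_cost = energy_mwh_per_day * 1_000 * price_per_kwh  # MWh -> kWh
    print(f"inference cost per day: ${daily_cost:,.0f}")     # $82,500
    print(f"days to match training cost: {training_cost / daily_cost:.0f}")  # ~48

Using the unrounded $82,500/day, inference spending overtakes the $4 million training estimate in about 48 days, consistent with the ~50-day figure above.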
That's not true for the most well-known models. Meta's LLaMA training and architecture, for example, were predicated on the observation that training cost is a drop in the bucket compared to the inference cost over a model's lifetime.