ChatGPT excites people who think (I use this word with caution) that they can use GPT to do less work, impress people, or advance their careers.
This idea may hold for those who already know how to do the work they are asking GPT to do (e.g., writing a blog post), but it won’t work for learners who admire GPT’s output without being able to produce it themselves. They will pass off GPT’s work as their own, but they will not be able to explain “their” logic or conclusions. “GPT-cheats” will get caught. Hopefully they will merely be disciplined, but others will do far more damage through their assertive ignorance (a human version of hallucinating). I am reminded of the massive damage caused by Bush’s loyal-but-incompetent agents in Iraq.
In the meantime, GPT users will be busy trying to fool each other into paying them for work that GPT has done, while non-GPT users will find the entire situation frustrating.
Non-augmented humans will take hours to do what GPT can do in seconds; they will struggle to understand complex ideas and integrate them into coherent thoughts. They will question the point of going on. But they will also be the ones to spot the errors, to suggest novel alternatives, to add value.
In the land of the blind, the one-eyed man is king.
With GPT, we will see adults losing their analytical skills. Students will not even acquire them. Average IQ will drop, as will productivity.
(The only exception will be the few people who use GPT as a “Socratic sparring partner” to push their knowledge and skills further. They can benefit from GPT, but the vast majority will fall for an “apple of knowledge” that is rotten inside.)
My one-handed conclusion is that GPT will take the jobs of anyone who uses GPT to do those jobs, let alone study for them.