I’ve been on record saying I believe Claude by Anthropic is a much better product than ChatGPT by OpenAI. The concerns that AI raises are crucial and demand careful consideration of how it is built. What deep human insecurities, the questions we are afraid to ask, will AI answer for us? Perhaps more worrisome is what bad actors will do with it (and what will it enable for the good ones?). What biases are we imprinting in the system? Scarier still, what are the blind spots we can’t see?
Every piece of technology solves problems while simultaneously creating new ones. Automobile accidents didn’t exist before automobiles, but that didn’t stop us from producing them.
Running from the tech is not an option. So what are we going to do to create the best kind of AI? What do we want it to do? Which problems are we trying to solve? Who should regulate it? Who can? Where are we trying to go? How do we set the guardrails? I found this discussion fascinating and think it’s worth a listen.
https://podcasts.apple.com/us/podcast/tetragrammaton-with-rick-rubin/id1671669052?i=1000711127134
Remember: if you don’t have AI working for you, soon you will be working for AI.