Nowadays, I use GPT-4's API for nearly every problem I encounter. By feeding it all the relevant information and applying different prompts, I get a clearer understanding and can make more informed decisions. Even though GPT-4 was released less than a year ago, I'm astounded (to say the least) at how integral it has become to my thinking process.
Some prompts I use that significantly aid the process (a minimal API sketch follows the list):
* Provide a concise definition of [specific topic or concept] and explain its key characteristics.
* List three advantages and three disadvantages of [specific technology, method, or approach].
* Explain the step-by-step process of [specific task or procedure] in a clear and logical manner.
* Compare and contrast [two different approaches, methods, or models] in terms of their strengths and limitations.
* Predict the potential impact of [emerging technology or trend] on [specific industry or domain].
* Describe the main challenges associated with [problem or issue] and propose possible solutions.
* Summarize the main findings and conclusions of [research paper or study] in three concise points.
* Create a comprehensive list of resources, including books, articles, and websites, related to [specific topic].
* Provide examples of real-world applications or use cases for [specific technology or methodology].
* Offer insights and recommendations for optimizing [specific process or system] based on industry best practices.
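For anyone who wants to script these rather than paste them into the chat UI, here's a minimal sketch of filling one template and sending it through the API. It assumes the official openai Python package (v1+) and an OPENAI_API_KEY environment variable; the model name and the chosen template are just illustrative.

```python
# Minimal sketch: fill a prompt template and send it to the chat API.
# Assumes the official `openai` package (v1+) and OPENAI_API_KEY set
# in the environment; model name and template are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEMPLATE = "List three advantages and three disadvantages of {subject}."

def ask(subject: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": TEMPLATE.format(subject=subject)}],
    )
    return response.choices[0].message.content

print(ask("static typing"))
```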
Imagine that GPT is just another person you communicate with. When they give you new information, how do you guard against them possibly being wrong? You verify it against other sources.
The "quality" of the wrong information you get from GPT-4 is very different from a human who is wrong. For example, I wouldn't expect a human to give me a long list of books that don't actually exist without hesitation.
Sure, but still: if you ask GPT or a person "What are the best books about teaching dogs to sit?", you'd look up each book individually, read reviews, and figure out whether they're really worth your time before purchasing any of them. And you'd find out whether a book exists as soon as you searched for it.
So even if the "quality" is different, the way to verify the information is the same.
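As a sketch of that "look up each title" step, you could even automate it against Open Library's public search API. The first title below is a real dog-training book; the second is a made-up placeholder standing in for a hallucinated one.

```python
# Check whether each GPT-suggested title actually exists by querying
# Open Library's public search API; hallucinated titles typically
# return zero matches. Titles here are placeholders for illustration.
import requests

suggested_titles = [
    "Don't Shoot the Dog!",         # real book by Karen Pryor
    "The Canine Obedience Matrix",  # made-up placeholder
]

for title in suggested_titles:
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title},
        timeout=10,
    )
    resp.raise_for_status()
    num_found = resp.json().get("numFound", 0)
    status = "found" if num_found else "possibly hallucinated"
    print(f"{title!r}: {num_found} match(es) -> {status}")
```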
Both AI and humans can be wrong, but in different ways. Humans often slip up due to bias or faulty memory, while AI usually stumbles over data gaps or misunderstood context. AI misinformation isn't "worse"; it's just different. Understanding this helps us use AI more effectively.
Zero trust: you have to unit test and actually run what it gives you. Or tell it in a separate session that a co-worker gave you this solution, say it doesn't work, and ask it to explain why. I quite often enlist the bot in helping to prove itself right.
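To illustrate the unit-test step with a hypothetical example: suppose the bot hands you this median function. A quick test catches the bug before the code goes anywhere.

```python
import unittest

# Hypothetical "solution" the bot handed over.
def median(values):
    values = sorted(values)
    mid = len(values) // 2
    return values[mid]  # bug: wrong for even-length inputs

class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        # Fails: the bot's version returns 3 instead of 2.5.
        self.assertEqual(median([1, 2, 3, 4]), 2.5)

if __name__ == "__main__":
    unittest.main()
```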
Googling. Just today I asked ChatGPT (not GPT-4) for papers or books about some topic, and it gave me five pointers. Two of them contained useful information; the rest were hallucinated.
Use GPT-4 with web browsing mode enabled, or Bing Chat, if you want links to real articles. Bing Chat has come a long, long way. Impressive capabilities, and much less hallucination.
Bing Chat? You mean having to use Edge, a.k.a. Chromium without any extensions? I'd sooner go to Firefox.
GPT-4 with browsing isn't quite there yet either; it usually takes at least two or three attempts before it doesn't fail somewhere along the way. It should be pretty good once they iron it out, though.