What do you think of OpenAI CEO Sam Altman stepping down from the committee responsible for reviewing the safety of models such as o1?
Last Updated: 29.06.2025 06:38

Is it better to use the terminology, “Rapid Advances In AI,” or “Rapidly Advancing AI,” or “Rapidly Evolving Advances in AI,” or “EXPONENTIAL ADVANCEMENT IN AI,” when I’m just looking for an overall way of putting terms - one way, within a single context?

Let’s do a quick Google:

“RAPID ADVANCES IN AI” - Fifth down (on Full Hit)

“RAPIDLY ADVANCING AI” - Eighth down (on Hit & Graze)
I may as well just quote … myself:

The dilemma: “anthropomorphically loaded language” (according to a LLM chat bot query, prompted with those terms and correlations, the better-accepted choice of terminology), or “anthropomorphism loaded language” (the more accurate, but rarely used variant terminology, from “Talking About Large Language Models”), when describing the way terms were used in “Rapid Advances in AI”?
Function Described. January, 2022 (Google)

"[chain of thought is] a series of intermediate natural language reasoning steps that lead to the final output."
Same Function Described. January, 2023 (Google Rewrite v6)

"a simple method called chain of thought prompting -- a series of intermediate reasoning steps -- improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks."
Same Function Described. September, 2024 (OpenAI o1 Hype Pitch)

"[chain of thought means that it] learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn't working. This process dramatically improves the model's ability to reason."
It’s the same f*cking thing.

In two and a half years, the description of the same function has “rapidly advanced,” from (barely) one sentence, to three overly protracted, anthropomorphism-loaded-language-stuffed, gushingly exuberant, descriptive sentences.

Damn.
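For anyone who wants to see the function itself rather than the adjectives, here is a minimal sketch of chain of thought prompting in the January 2022 sense: the prompt supplies the intermediate reasoning steps. The worked example is the tennis-ball problem from that paper; query_llm() is a hypothetical placeholder, not any particular vendor's API.

```python
# Minimal sketch of chain of thought prompting as the January 2022 paper
# describes it: a few-shot prompt whose example answer spells out
# "a series of intermediate natural language reasoning steps" before the
# final output. query_llm is a hypothetical placeholder for any LLM client.

def query_llm(prompt: str) -> str:
    """Stand-in for an actual model call; wire up whatever client you use."""
    raise NotImplementedError

# Standard few-shot example: question, then the bare answer.
STANDARD_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: The answer is 11.\n\n"
)

# Chain-of-thought example: same question, but the answer walks through
# the intermediate steps before giving the result.
COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def ask(question: str, chain_of_thought: bool = True) -> str:
    """Prepend one worked example, then pose the new question."""
    example = COT_EXAMPLE if chain_of_thought else STANDARD_EXAMPLE
    return query_llm(f"{example}Q: {question}\nA:")
```

The only real difference the 2024 pitch describes is that the model was trained to produce those intermediate steps on its own instead of being shown them in the prompt; the function - intermediate reasoning steps leading to the final output - is the one described in 2022.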
Combining: ONE AI, DOING THE JOB OF FOUR - increasing efficiency and productivity, further advancing the rapidly advancing … something.
Of course, that was how the … step was decided, in the 2015 explanatory flowchart - within a day.

Further exponential advancement will be vivisection (live dissection) of Sam, by use instances - with each further dissection of dissected [former] Sam.
“Some people just don’t care.”