AI Safety Academia
AI Alignment Institutes
AI Safety Journals
AI Safety Workshop
xAI
Contra The xAI Alignment Plan
Machine Alignment Monday 7/17/23
https://astralcodexten.substack.com/p/contra-the-xai-alignment-plan

OpenAI
What could a solution to the alignment problem look like?
My currently favored approach to alignment research is to build a system that does alignment research better than us. But what would that system actually do? The obvious answer is "whatever we're doing right now." This is unsatisfactory because we're not actually trying to solve the whole alignment problem; we're just trying to build a better alignment researcher.
https://aligned.substack.com/p/alignment-solution

Bill Gates
The risks of AI are real but manageable
Bill Gates explains the risks associated with AI and argues that they are manageable. Innovations often create new risks that need to be controlled.
https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable
Seonglae Cho