if ai-generated open-source code is incomprehensible to review, does that effectively make it closed-source?
Stage 1: Zero or Near-Zero AI: maybe code completions, sometimes asking a chat model questions.
Stage 2: Coding agent in IDE, permissions turned on. A narrow coding agent in a sidebar asks your permission to run tools.
Stage 3: Agent in IDE, YOLO mode: Trust goes up. You turn off permissions, agent gets wider.
Stage 4: In IDE, wide agent: Your agent gradually grows to fill the screen. Code is just for diffs.
Stage 5: CLI, single agent. YOLO. Diffs scroll by. You may or may not look at them.
Stage 6: CLI, multi-agent, YOLO. You regularly use 3 to 5 parallel instances. You are very fast.
Stage 7: 10+ agents, hand-managed. You are starting to push the limits of hand-management.
Stage 8: Building your own orchestrator. You are on the frontier, automating your workflow.
Reagan proved you could use TV aesthetics in governance. Trump is proving you cannot replace governance with TV.
[the world is not given by parents, but borrowed from children.]
Who profits from a world without cash?
[Transitioning to a cashless society implies moving away from state-issued money towards 'tokens' issued by private corporations. We currently trust their casino chips because they can be redeemed for fiat currency, but this is no longer possible when there is no cash.]
This post is an announcement for those who were unaware, an explanation for those who are confused, and a record so I don’t forget.
Italian voice actor Carlo Bonomi was the voice of all of the characters on Pingu. A clown by trade, he used a theater technique called grammelot, which consists of "speaking" in a mix of babbled gibberish noises. He improvised all the voices live and unscripted.
what we make with a tool does not belong to the tool. A manuscript doesn’t stay inside the typewriter, a photo doesn’t stay inside the camera, and a song doesn’t stay in the microphone.
I see everyone's project, and purpose, as connected to all others, like pieces of a grand puzzle. And my job in the last 2 years has been looking at each person and finding where they fit. When it works, they thrive, and the world thrives, because the world needed them, and they needed it
and now I'm trying to see if I can create a container where people understand what I'm doing and how I'm doing it, and where others can help me do it
Skip pages, read anything, don't read that, read anywhere, don't finish, repeat, read aloud…
If you don’t have access to a dentist in your trust network, but you trust me, you can “borrow” my connection here.
if someone with resources wants to give you money, you should say no if it’s clear to you it will make your life worse, even if it’s not clear to them. Don’t let their (bad) judgement override your clarity.
LLMs are coherence engines, not truth engines
[LLMs generate coherence more than truth, with] no access to the world, no sensory grounding, no lived experience, and no intrinsic way to check correspondence between their outputs and reality.
[The same is true of humans, as we] construct narratives, causal explanations, identities, and moral frameworks that hang together, rather than ones that are objectively correct. [We tend towards] narrative consistency and social acceptability, and we reinforce biases based on our beliefs.
science works because it builds institutional scaffolding that forces grounding through measurement, replication, falsification, and peer review. Without grounding, both humans and LLMs drift into elegant nonsense.
The risk with LLMs is not that they lie, but that they speak with fluent confidence in domains where humans already confuse coherence with truth.
I guess I was wrong about AI persuasion
“The best diplomat in history” wouldn’t just be capable of spinning particularly compelling prose; it would be everywhere all the time, spending years in patient, sensitive, non-transactional relationship-building with everyone at once. It would bump into you in whatever online subcommunity you hang out in. It would get to know people in your circle. It would be the YouTube creator who happens to cater to your exact tastes. And then it would leverage all of that.
We can be convinced of a lot. But it doesn’t happen because of snarky comments on social media or because some stranger whispers the right words in our ears. The formula seems to be:
- repeated interactions over time
- with a community of people
- that we trust
When I encountered spinach as an adult, instead of tasting a vegetable, I tasted a grueling battle of will. Spinach was dangerous—if I liked it, that would teach my parents that they were right to control my diet.
On planes, the captain will often invite you to “sit back and enjoy the ride”. This is confusing. Enjoy the ride? Enjoy being trapped in a pressurized tube and jostled by all the passengers lining up to relieve themselves because your company decided to cram in a few more seats instead of having an adequate number of toilets? Aren’t flights supposed to be endured?
Unit is a general-purpose visual programming system built for the future of interactivity
Enter 2 or more words to see their relative distances to the concepts of "good" and "evil".
based on language model embeddings, which capture the semantics associated with the words in humanity's collective consciousness.
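A minimal sketch of how a comparison like this might be computed, assuming the sentence-transformers library and an arbitrary model choice (neither is necessarily what the tool actually uses):

```python
# Sketch: score words by cosine similarity to the embeddings of
# "good" and "evil". The library and model are assumptions for
# illustration, not the tool's actual implementation.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

good, evil = model.encode(["good", "evil"])

words = ["kitten", "betrayal", "paperwork"]
for word, vec in zip(words, model.encode(words)):
    # A word "leans good" when its similarity to "good" exceeds
    # its similarity to "evil", and vice versa.
    print(f"{word:>10}  good={cosine(vec, good):+.3f}  evil={cosine(vec, evil):+.3f}")
```

Cosine similarity is the usual choice here because the magnitude of an embedding vector carries little meaning; it is the direction that encodes the semantics being compared.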
visual interfaces of our tools should faithfully represent the way the underlying technology works: if a chat interface shows a private conversation between two people, it should actually be a private conversation between two people, rather than a “group chat” with unknown parties underneath the interface.
We are using LLMs for the kind of unfiltered thinking that we might do in a private journal – except this journal is an API endpoint. An API endpoint to a data lake specifically designed for extracting meaning and context. We are shown a conversational interface with an assistant, but if it were an honest representation, it would be a group chat with all the OpenAI executives and employees, their business partners / service providers, the hackers who will compromise that plaintext data, the future advertisers who will almost certainly emerge, and the lawyers and governments who will subpoena access.
When you work through a problem with an AI assistant, you’re not just revealing information - you’re revealing how you think. Your reasoning patterns. Your uncertainties. The things you’re curious about but don’t know. The gaps in your knowledge. The shape of your mental model.
When advertising comes to AI assistants, they will slowly become oriented around convincing you of something (to buy something, to join something, to identify with something), but they will be armed with total knowledge of your context, your concerns, your hesitations. It will be as if a third party were paying your therapist to convince you of something.
Puppy Wisdom, if we can hear it.
[When a puppy bites, it can be painful but also totally normal. Why does knowing this give me so much patience towards an animal, while I take it so personally when my partner does something that hurts? Getting hurt and processing it together can also be a normal part of relationships, and you can't have one without the other.]