Rosano / Journal

7 entries for Tuesday, January 6, 2026

I guess I was wrong about AI persuasion

“The best diplomat in history” wouldn’t just be capable of spinning particularly compelling prose; it would be everywhere all the time, spending years in patient, sensitive, non-transactional relationship-building with everyone at once. It would bump into you in whatever online subcommunity you hang out in. It would get to know people in your circle. It would be the YouTube creator who happens to cater to your exact tastes. And then it would leverage all of that.

We can be convinced of a lot. But it doesn’t happen because of snarky comments on social media or because some stranger whispers the right words in our ears. The formula seems to be:

  1. repeated interactions over time
  2. with a community of people
  3. that we trust

You can try to like stuff

When I encountered spinach as an adult, instead of tasting a vegetable, I tasted a grueling battle of will. Spinach was dangerous—if I liked it, that would teach my parents that they were right to control my diet.

On planes, the captain will often invite you to, “sit back and enjoy the ride”. This is confusing. Enjoy the ride? Enjoy being trapped in a pressurized tube and jostled by all the passengers lining up to relieve themselves because your company decided to cram in a few more seats instead of having an adequate number of toilets? Aren’t flights supposed to be endured?

Unit

Unit is a general-purpose visual programming system built for the future of interactivity.

Polarized Words

Enter 2 or more words to see their relative distances to the concepts of "good" and "evil", based on language model embeddings, which capture the semantics associated with the words in humanity's collective consciousness.
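The mechanics can be illustrated with a small sketch: score a word by comparing its embedding's cosine similarity to the embeddings of the two anchor words. The toy 3-d vectors and the `polarity` helper below are illustrative stand-ins, not the tool's actual model or API; a real version would pull vectors from a language-model embedding endpoint.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-d vectors standing in for real language-model embeddings.
EMB = {
    "good":    [0.9, 0.1, 0.2],
    "evil":    [-0.8, 0.2, 0.1],
    "hero":    [0.7, 0.3, 0.1],
    "villain": [-0.6, 0.4, 0.2],
}

def polarity(word):
    """Positive means closer to 'good', negative means closer to 'evil'."""
    v = EMB[word]
    return cosine(v, EMB["good"]) - cosine(v, EMB["evil"])
```

With these placeholder vectors, `polarity("hero")` comes out positive and `polarity("villain")` negative, which is the relative-distance comparison the tool visualizes.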

Confessions to a data lake

The visual interfaces of our tools should faithfully represent the way the underlying technology works: if a chat interface shows a private conversation between two people, it should actually be a private conversation between two people, rather than a “group chat” with unknown parties underneath the interface.

We are using LLMs for the kind of unfiltered thinking that we might do in a private journal – except this journal is an API endpoint. An API endpoint to a data lake specifically designed for extracting meaning and context. We are shown a conversational interface with an assistant, but if it were an honest representation, it would be a group chat with all the OpenAI executives and employees, their business partners / service providers, the hackers who will compromise that plaintext data, the future advertisers who will almost certainly emerge, and the lawyers and governments who will subpoena access.

When you work through a problem with an AI assistant, you’re not just revealing information; you’re revealing how you think. Your reasoning patterns. Your uncertainties. The things you’re curious about but don’t know. The gaps in your knowledge. The shape of your mental model.

When advertising comes to AI assistants, they will slowly become oriented around convincing you of something (to buy something, to join something, to identify with something), and they will be armed with total knowledge of your context, your concerns, your hesitations. It will be as if a third party paid your therapist to convince you of something.

Puppy Wisdom, if we can hear it.

[When a baby dog bites, it can be painful but also totally normal. Why can knowing this give me so much patience towards an animal, yet I take it so personally when my partner does something which hurts? Getting hurt and processing it together can also be a normal part of relationships, and you can't have one without the other.]

Projects solve your own need, whereas products are the intersection of other people's needs, your capacity to build a solution, and their means to compensate you for it.