Rosano / Journal

11 entries for 2026

Tuesday, January 6, 2026

I guess I was wrong about AI persuasion

“The best diplomat in history” wouldn’t just be capable of spinning particularly compelling prose; it would be everywhere all the time, spending years in patient, sensitive, non-transactional relationship-building with everyone at once. It would bump into you in whatever online subcommunity you hang out in. It would get to know people in your circle. It would be the YouTube creator who happens to cater to your exact tastes. And then it would leverage all of that.

[We can be convinced of a lot. But it doesn’t happen because of snarky comments on social media or because some stranger whispers the right words in our ears. The formula seems to be:

  1. repeated interactions over time
  2. with a community of people
  3. that we trust]

You can try to like stuff

When I encountered spinach as an adult, instead of tasting a vegetable, I tasted a grueling battle of will. Spinach was dangerous—if I liked it, that would teach my parents that they were right to control my diet.

On planes, the captain will often invite you to, “sit back and enjoy the ride”. This is confusing. Enjoy the ride? Enjoy being trapped in a pressurized tube and jostled by all the passengers lining up to relieve themselves because your company decided to cram in a few more seats instead of having an adequate number of toilets? Aren’t flights supposed to be endured?

Unit

Unit is a general-purpose visual programming system built for the future of interactivity

Polarized Words

Enter 2 or more words to see their relative distances to the concepts of "good" and "evil".

based on language model embeddings which capture the semantics associated with the words in humanity's collective consciousness.
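A minimal sketch of how such a tool might compute those relative distances, assuming cosine similarity against the embeddings of "good" and "evil". The tiny 3-dimensional vectors and the `polarity` helper here are illustrative stand-ins, not the site's actual model or code; a real version would use high-dimensional language-model embeddings.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def polarity(word_vec, good_vec, evil_vec):
    # Positive -> closer to "good", negative -> closer to "evil".
    return cosine(word_vec, good_vec) - cosine(word_vec, evil_vec)

# Toy 3-d embeddings standing in for real language-model vectors.
embeddings = {
    "good":   np.array([1.0, 0.1, 0.0]),
    "evil":   np.array([-1.0, 0.1, 0.0]),
    "kitten": np.array([0.8, 0.3, 0.2]),
    "crime":  np.array([-0.7, 0.2, 0.1]),
}

for word in ("kitten", "crime"):
    score = polarity(embeddings[word], embeddings["good"], embeddings["evil"])
    print(word, round(score, 3))
```

With real embeddings the interesting part is exactly what the entry describes: the scores reflect the associations a language model has absorbed from humanity's collective writing, not any ground truth about the words themselves.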

Confessions to a data lake

visual interfaces of our tools should faithfully represent the way the underlying technology works: if a chat interface shows a private conversation between two people, it should actually be a private conversation between two people, rather than a “group chat” with unknown parties underneath the interface.

We are using LLMs for the kind of unfiltered thinking that we might do in a private journal – except this journal is an API endpoint. An API endpoint to a data lake specifically designed for extracting meaning and context. We are shown a conversational interface with an assistant, but if it were an honest representation, it would be a group chat with all the OpenAI executives and employees, their business partners / service providers, the hackers who will compromise that plaintext data, the future advertisers who will almost certainly emerge, and the lawyers and governments who will subpoena access.

When you work through a problem with an AI assistant, you’re not just revealing information; you’re revealing how you think. Your reasoning patterns. Your uncertainties. The things you’re curious about but don’t know. The gaps in your knowledge. The shape of your mental model.

When advertising comes to AI assistants, they will slowly become oriented around convincing us of something (to buy something, to join something, to identify with something), but they will be armed with total knowledge of your context, your concerns, your hesitations. It will be as if a third party pays your therapist to convince you of something.

Puppy Wisdom, if we can hear it

[When a baby dog bites, it can be painful but also totally normal. Why can knowing this give me so much patience towards an animal, yet I take it so personally when my partner does something which hurts? Getting hurt and processing it together can also be a normal part of relationships, and you can't have one without the other.]

projects solve your own need whereas products are the intersection of other people's needs, your capacity to build a solution, and their means to compensate you for it.

Monday, January 5, 2026

A Gentle Introduction To Learning Calculus

Math and poetry are fingers pointing at the moon. Don’t confuse the finger for the moon.

Jackson Kiddard

Anything that annoys you is teaching you patience.

Anyone who abandons you is teaching you how to stand up on your own two feet.

Anything that angers you is teaching you forgiveness and compassion.

Anything that has power over you is teaching you how to take your power back.

Anything you hate is teaching you unconditional love.

Anything you fear is teaching you the courage to overcome your fear.

Anything you can’t control is teaching you how to let go.

Sunday, January 4, 2026

How do we build the future with AI?

[The bigness and slowness of government] is supposed to create space and resources to account for the communities that a “lean” approach deliberately ignores.

building for yourself on a saturated platform doesn’t shift paradigms if you are already the main character

it’s not like masses of sheeple relish in the experience of catching a cab and couldn’t describe a theoretical better option if they tried. It’s that realizing such a thing requires availability of copious investment capital in the face of non-negligible risk. People who can pursue this kind of thing are either previous-tech-exit-rich or poised-to-convince-venture-capitalists-rich. Their stories are fun to tell and hear, but not practical mogul origin stories for the vast majority of tech workers.

In the nineties, the Dorm Room Garage Dudes had an appreciable head start on relationships and resources to build the commercial web. But by the time the mobile platform came along, those same people had become billionaire tech moguls with cliques that garnered names like ‘The Paypal Mafia.’ This gave them an order of magnitude more opportunity to move first on mobile. Over time, that lead has continued to grow, and with it the time from market creation to market saturation has shortened.

Immutable Infrastructure, Immutable Code

A system becomes legacy when understanding it requires historical knowledge that isn't encoded anywhere except the code itself.

The tragedy is that teams recreate this failure mode faster with AI, because mutation feels cheap while understanding quietly becomes expensive. You can generate a thousand lines in seconds. But the moment you start editing those lines, you've created an artifact that can only be understood historically. You've created brittle legacy code in an afternoon.

If knowledge only exists in the implementation, it's not knowledge. It's risk. Regeneration forces you to make the implicit explicit, or accept that it wasn't essential.

Burn it. Regenerate it. Trust what survives the fire.