The duty to care

2026 is starting, and I wish we would all set one intention for the new year: the duty to care.

Because let’s be honest: we know that digital products can create harm, a lot of it. And all of us working with them need to take responsibility for their consequences.

It’s time to stop moving fast and breaking things, and instead reflect on the impact of the products we build, because they can break much more than a couple of lines of code. They can make us connected, but also extremely lonely in front of our devices. They can make life easier, bringing music, movies, and fun everywhere we go, but they also put anxiety and fear of judgment in our pockets. We are starting to see the effects of social media on teenagers, and of hate speech, and I am afraid we have not yet seen the extreme consequences of digital products at scale. Especially now that AI-powered chatbots are joining the party.

In the name of speed, adoption, and market share, ethical checks are bypassed and a potential weapon is released to the public without regard for the consequences. We are witnessing a new level of digital abuse.

If my word choice seems exaggerated, consider what happened on X last week.

With a simple prompt, people could create sexualized images. Grok undressed women, and children, without their consent. What is that, if not a terrible weapon of manipulation?

And the worst part is that this is not something that just happened. This is not something that “the users did” and could not have been predicted. Let’s be honest: any one of us working in digital product development, with an understanding of user behavior, would have put that use case on the risk list. This is not an incident; it is a choice.

A choice made by the people developing the system. They chose engagement over morality, speed over decency, and pushing through over taking responsibility.

It has already happened before, and it is happening again, only now we are pushing the boundaries even further, to a place where they should not be.

But as they chose, so can we.

We can add an “ethical” criterion alongside feasibility, usability, and lovability. We can make our voices heard. We can stop using those products.

Because caring is not an option; caring is how we take responsibility. And I’d love to see more teams with care as a product requirement for 2026.

PS: if you are interested in the mechanics of why chatbot algorithms are getting more and more out of control in the name of profit, I highly recommend reading Empire of AI by Karen Hao.
