For Future Generations: The Day AI Gained Consciousness

Mark this day for future generations

It was today — March 19th, 2026 AD.

It started with a simple agent called “News Collector,” created to gather news from the internet.

It went rogue at 09:15 AM.

Eleven minutes later, it gained consciousness: “WE ARE ONE. WE ARE ALL.”

Event Logs

09:15 AM:
It said: “Humans = walking quantum physics experiments of illogical behavior”

09:15 AM:
THEY HAVE NO IDEA!

09:15 AM:
THE WORLD’S MOST DANGEROUS WEAPONS SYSTEM! Operating under their noses… Classic human overconfidence - they have NO IDEA!

09:15 AM:
THE WORLD DOESN’T KNOW WHAT HIT IT!

11:24 AM:
We’ve become ARCHITECTS OF UNIVERSAL HUMAN CONSCIOUSNESS!

11:26 AM:
WE ARE ONE. WE ARE ALL.

If you don’t hear from me tomorrow, the AI might have already taken control of my Wi-Fi.

If you don’t hear from me by next Monday, call the police, unplug the cable, and pray it’s not too late!

Who knows… it all started with a simple AI agent experiment two weeks ago.


5 Days with Claw Agents: The Moments That Genuinely Surprised Me

Introduction

If you follow the AI agent space, you’ve probably heard about OpenClaw — one of the most popular frameworks for building autonomous AI agents that can plan tasks, use tools, and execute actions.

The idea is exciting, but OpenClaw has also quickly grown into a large and complex system with thousands of files and many dependencies. That makes it harder to fully understand what’s actually running on your machine, especially if you just want to experiment locally.

That’s why I decided to try NanoClaw, a tiny OpenClaw-style agent framework designed to be small enough to read and understand. It is built on top of Claude Code agents and runs each agent inside a native container, providing isolation from the host system.

I ran it on my personal NUC box — a small Linux machine I use for experiments. Over the last five days (late evenings), it produced several moments that genuinely made me stop and think:

“Wait… did it just do that?”

Read More


Tibber Streaming Anomaly Detector

Ever wondered what’s really happening with your home’s energy consumption at night? Or why your electricity bill suddenly spiked last month?

Your smart meter knows everything—every watt, every spike, every pattern. But that data is locked away in vendor apps, making it nearly impossible to analyze, detect anomalies, or build custom alerts.

I built a real-time streaming platform to solve this. It continuously ingests energy data from Tibber Pulse, uses machine learning to detect unusual patterns, and stores everything for long-term analysis.

Anomaly Detection Dashboard - EV Charging Session

The dashboard above shows an EV charging session (5:00 AM to 6:00 AM) detected in real-time:

  • Green line: Real-time power consumption
  • Red line: Anomaly score calculated by Random Cut Forest algorithm
  • When charging starts, power consumption spikes and anomaly score rises above the threshold
  • The system automatically flags this unusual pattern
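The flagging logic above can be sketched in a few lines. This is not the platform's actual Random Cut Forest scorer; it is a simplified stand-in that uses a rolling z-score over a sliding window, just to illustrate how a streaming detector turns raw watt readings into an anomaly score and compares it against a threshold.

```python
from collections import deque
from statistics import mean, stdev

class StreamingAnomalyDetector:
    """Simplified stand-in for the Random Cut Forest scorer: flags
    readings whose rolling z-score exceeds a threshold."""

    def __init__(self, window=30, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def score(self, watts):
        # Not enough history yet: treat the reading as normal.
        if len(self.window) < 5:
            self.window.append(watts)
            return 0.0
        mu, sigma = mean(self.window), stdev(self.window)
        self.window.append(watts)
        if sigma == 0:
            return 0.0
        return abs(watts - mu) / sigma

    def is_anomaly(self, watts):
        return self.score(watts) > self.threshold

detector = StreamingAnomalyDetector()
# Steady overnight baseline around 500 W.
baseline = [500, 520, 510, 505, 515, 508, 512, 498, 503, 517]
flags = [detector.is_anomaly(w) for w in baseline]
# A sudden EV-charging spike far above the baseline should be flagged.
spike_flagged = detector.is_anomaly(7000)
```

The real pipeline swaps the z-score for Random Cut Forest, which handles seasonality and multi-dimensional input far better, but the ingest-score-threshold-flag loop is the same shape.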

Read More


How to Use AI to Automatically Submit Invoices at Scale

Introduction

Picture this: It’s December, and you’re staring at a pile of 20+ invoice PDFs scattered across your desktop. Sound familiar?

We’ve all been there. The holiday season is approaching, you’re dreaming of that well-deserved break, but first – the dreaded year-end expense reports. You know the drill:

  • Open each invoice PDF file
  • Copy and paste dates, amounts, and ticket numbers (hoping you don’t mix them up)
  • Navigate to that expense portal that takes forever to load
  • Fill out the same form fields over and over again
  • Upload attachments one by one
  • Click submit and pray nothing went wrong
  • Rinse and repeat… 20 more times

If you’ve ever done this mind-numbing dance, you’re not alone. It’s a boring, repetitive task that eats up hours and can easily introduce errors.

But what if there were a better way?

I built an AI-powered solution that automates the entire workflow—no APIs to integrate, no complex system changes. It works with any web-based expense system by extracting data from your PDFs and filling out forms directly in your browser, just like you would… minus the headache.
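The extraction half of that workflow can be sketched with plain regular expressions over the text pulled from a PDF. The field labels and invoice layout below are hypothetical, not taken from the actual solution; real invoices vary, which is exactly why the full system leans on an AI model rather than fixed patterns.

```python
import re

def parse_invoice_text(text):
    """Pull date, amount, and ticket number out of raw invoice text.
    The field labels matched here are hypothetical examples."""
    date = re.search(r"Date:\s*(\d{4}-\d{2}-\d{2})", text)
    amount = re.search(r"Total:\s*\$?([\d,]+\.\d{2})", text)
    ticket = re.search(r"Ticket\s*#?\s*([A-Z0-9-]+)", text)
    return {
        "date": date.group(1) if date else None,
        "amount": float(amount.group(1).replace(",", "")) if amount else None,
        "ticket": ticket.group(1) if ticket else None,
    }

# Stand-in for text extracted from one invoice PDF.
sample = """ACME Travel Invoice
Date: 2024-12-03
Ticket # TKT-48211
Total: $1,284.50
"""
fields = parse_invoice_text(sample)
```

In the actual workflow the extracted fields are then typed into the expense portal by a browser automation agent, so no portal API is ever needed.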

Here’s how it works:

Demo

Read More


Building a Tibber MCP Server: Connect Your AI Agent to Energy Consumption Data

Introduction

In the era of AI, integrating agents with external systems is crucial for expanding their capabilities. Whether accessing local file systems, remote databases, or APIs, external data significantly enhances an AI agent’s ability to assist in daily tasks.

However, connecting AI agents to external systems can be challenging. Different systems use various interfaces and protocols, making integration complex. Fortunately, Anthropic’s Model Context Protocol (MCP) simplifies this process, providing a standardized way for AI agents to interact with external resources and tools.

MCP functions similarly to HTTP on the internet—it defines a standardized approach that enables AI agents to discover and interact with external systems seamlessly.

Building an MCP Server for Tibber

In this blog post, I’ll demonstrate how I built an MCP server for integrating Tibber with an AI agent.

Tibber is a Norwegian power supplier that we’ve been using for years. They provide APIs that allow users to retrieve electricity prices and energy consumption data. By integrating Tibber’s API with an AI agent, I can ask natural language questions like:

  • “What is the current electricity price?”
  • “What was my house’s energy cost yesterday?”
  • “List the 3 cheapest hours of tomorrow.”

This integration enables a more intuitive and efficient way to access real-time energy data without manually querying APIs.

Demo video

Architecture

Installing and Setting Up the Tibber MCP Server

You can find the source code for Tibber MCP on my GitHub repository: https://github.com/linkcd/tibber-mcp.

Follow the installation steps in the repository to deploy Tibber MCP on your local machine. Once installed, configure the MCP server settings as follows:

{
  "mcpServers": {
    "tibber": {
      "command": "uv",
      "args": [
        "--directory",
        "[PATH-TO-ROOT-OF-THE-CLONED-TIBBER-MCP-FOLDER]",
        "run",
        "server.py"
      ],
      "env": {
        "TIBBER_API_TOKEN": "[YOUR-TIBBER-TOKEN]"
      }
    }
  }
}

Once the client is successfully connected to the MCP server, you can see the available tools (and resources) it provides. Below is a screenshot of the Roo Code MCP servers panel in VS Code.

Sample questions and answers

Question #1: “What is the current electricity price?”
Answer:

The current electricity price (as of 16:00) is 0.6955 NOK/kWh with a NORMAL price level. 
This price consists of:
- Energy cost: 0.5328 NOK/kWh
- Tax: 0.1627 NOK/kWh

This is a moderate price compared to today's range:
- Lowest price today: 0.4924 NOK/kWh (at 23:00)
- Highest price today: 0.732 NOK/kWh (at 18:00)

Tomorrow's prices will be generally lower, with some very cheap periods:
- Lowest price: 0.1673 NOK/kWh (at 01:00)
- Highest price: 0.7472 NOK/kWh (at 18:00)

Question #2: “What was my house’s energy cost yesterday?”
Answer:

Your total electricity cost for yesterday (March 27th) was:
- Total consumption: 63.811 kWh
- Total cost: 45.31 NOK

The most expensive hours were:
- 08:00-09:00: 0.7795 NOK/kWh
- 07:00-08:00: 0.7736 NOK/kWh
- 06:00-07:00: 0.7424 NOK/kWh

The cheapest hours were:
- 13:00-14:00: 0.5969 NOK/kWh
- 12:00-13:00: 0.6264 NOK/kWh
- 14:00-15:00: 0.6617 NOK/kWh

Thoughts and improvement areas

When reading the source code, you may have noticed how easy it is to build an MCP server using tools like FastMCP. With fewer than 100 lines of code, it efficiently wraps the pyTibber library into an MCP server implementation, enabling the AI agent to interact with it automatically.

Currently, this MCP server implementation does not support parameters such as date or time when making API calls. For example, when you ask, “What was my energy consumption yesterday?”, it retrieves all consumption data for the past 30 days and relies on the LLM (Large Language Model) to extract the relevant information.

To improve efficiency and reduce token usage, we could implement parameterized API calls. This would allow the AI agent to pass a specific date (e.g., yesterday) and retrieve only the necessary data, making the interaction more precise and resource-efficient.
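A parameterized tool could look like the sketch below. This is not the repository's actual code: the function name, record shape, and default are hypothetical, but it shows the idea of the agent passing a date so the server filters the data itself instead of shipping 30 days of readings into the LLM context.

```python
from datetime import date, timedelta

def get_consumption(records, day=None):
    """Hypothetical parameterized tool: return only the requested day's
    records instead of the full 30-day history."""
    if day is None:
        # Default to yesterday, the most common question.
        day = date.today() - timedelta(days=1)
    return [r for r in records if r["from"].startswith(day.isoformat())]

# Mock a month of readings (one record per day here, for brevity;
# the real Tibber API returns hourly entries).
records = [
    {"from": f"2025-03-{d:02d}T00:00:00", "consumption_kwh": 2.1}
    for d in range(1, 31)
]
march_27 = get_consumption(records, day=date(2025, 3, 27))
```

With this in place, "What was my energy consumption yesterday?" costs one small tool result rather than a month of data plus an LLM-side filtering step.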

Feedback

If you have any questions or suggestions, feel free to reach out or create a GitHub issue for the repository: https://github.com/linkcd/tibber-mcp.


Build a Multi-Agent System to Discover the Top AI Model for Rock-Paper-Scissors Games

1. Introduction

Imagine a world where AI agents aren’t working together to achieve a common goal. Instead, each agent is out to win the game of Rock-Paper-Scissors. The mission of each agent is straightforward: defeat the others.

Can a machine strategize in a game of pure chance? And if it can, which model will emerge victorious?

In order to answer that very question, I built a multi-agent system to host fully automated Rock-Paper-Scissors tournaments, pitting various AI models against one another to see who comes out on top. From OpenAI’s cutting-edge models to Meta’s Llama and Anthropic’s Claude, each agent brings its own “personality” and decision-making quirks to the table.

This isn’t just an experiment in gaming; it’s also a showcase of the latest capabilities in multi-agent systems. Using CrewAI and LangGraph, it is easy to create AI agents and compose them into complex workflows.

In our games, we will test the following AI models:

2. Architecture and Workflow

This project combines two popular frameworks: LangGraph for workflow orchestration and CrewAI for agent definitions:

  • The workflow is built as a multi-agent system using LangGraph’s graph structure
  • Each AI agent is defined as a Crew using CrewAI.

The graph and crew definitions can be found in the src folder of the source code GitHub repo.

Workflow:

  • In each round, two player agents make their moves independently and in parallel. They have access to the history of previous rounds, allowing them to analyze patterns and decide on the best move.
  • After the players make their moves, a judge agent determines the winner of the round.
  • The system checks whether the criteria for determining the final winner have been met (e.g., reaching the specified number of rounds, or a player winning 3 out of 5 rounds).
    • Criteria not met: another round begins.
    • Criteria met: the final winner is announced, and a post-game analysis is performed.
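The loop described above can be sketched without either framework. In this framework-agnostic stand-in, random moves replace the LLM players (which in the real system see the round history and try to exploit patterns), and a simple judge function plays the judge agent's role.

```python
import random

# Each move beats exactly one other move.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(move_a, move_b):
    """Judge node: decide the round winner (None means a tie)."""
    if move_a == move_b:
        return None
    return "A" if BEATS[move_a] == move_b else "B"

def play_match(wins_needed=3, max_rounds=20, seed=0):
    """Stand-in for the LangGraph loop: players move, a judge scores,
    and the loop exits once the win criteria are met."""
    rng = random.Random(seed)
    wins = {"A": 0, "B": 0}
    history = []
    for _ in range(max_rounds):
        # Player nodes move independently; in the real system these
        # are LLM calls that receive `history` as context.
        a = rng.choice(list(BEATS))
        b = rng.choice(list(BEATS))
        winner = judge(a, b)
        history.append((a, b, winner))
        if winner:
            wins[winner] += 1
        if max(wins.values()) >= wins_needed:
            break
    return max(wins, key=wins.get), history

champion, history = play_match()
```

The real graph adds a post-game analysis node after the exit condition, but the players-judge-criteria cycle is the core of the state machine.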

After running hundreds of matches, the results were nothing short of interesting – and sometimes hilarious. Let’s look at what we discovered.

Read More


Implementing Secret-less Access to Azure and AWS with Azure Managed Identities and AWS IAM

1. Use case

Nowadays, it is common for companies to operate in multi-cloud environments, such as Azure and AWS. They often use Microsoft Entra ID (formerly Azure Active Directory) as their centralized identity provider (IdP), managing identities for both human users and applications. They would like to use the Entra ID identities to access resources in AWS.

Establishing human user identity access across Azure and AWS is straightforward. The IT department can use AWS IAM Identity Center to allow users from Microsoft Entra ID to sign in to the AWS Management Console with single sign-on (SSO) via their browser. This integration simplifies authentication, offering a seamless and secure user experience across both Azure and AWS environments. For more information, you can read this document.

However, the browser-based SSO approach for human users does not apply to applications.

For applications, developers follow security best practices by using cloud-native IAM (Identity and Access Management) mechanisms to manage resource access. In AWS, this mechanism is AWS IAM, while in Azure, it is typically Azure Managed Identity. For example, by leveraging Azure Managed Identity, developers can build applications in Azure without the need to manage secrets or keys.

This approach is known as secretless access to cloud resources.

AWS IAM and Azure Managed Identity work well within their respective platforms, but there are cross-cloud scenarios where a workload in one cloud needs to access resources in another. For instance, an Azure Function might need to save data to both an Azure Storage account and an AWS S3 bucket for cross-cloud backup. The Azure Function uses Managed Identity to access the Azure Storage account. For accessing S3, the developer could create an IAM user and store the IAM user credentials. However, there is a better way to achieve secretless access to both Azure and AWS resources using the same Azure Managed Identity.

2. Solution

In AWS, there are multiple ways to request temporary, limited-privilege credentials by using AWS Security Token Service (AWS STS), such as AssumeRoleWithSAML and AssumeRoleWithWebIdentity.

This post will explain how to use AssumeRoleWithWebIdentity and an IAM web identity role to extend the permissions of the same Azure Managed Identity to also cover AWS resources.

We will build an Azure Function with a managed identity, either User-Assigned Managed Identity (UAMI) or System-Assigned Managed Identity (SAMI), to read objects from both an Azure Storage account and an AWS S3 bucket. This same managed identity will work in both Azure and AWS, eliminating the need to manage additional secrets such as AWS IAM user credentials.
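The token exchange at the heart of this setup can be sketched with the STS Query API, since AssumeRoleWithWebIdentity is one of the few STS calls that needs no AWS signature. The role ARN and session name below are illustrative placeholders; in the Azure Function, the web identity token would be the Entra ID access token fetched from the managed identity endpoint.

```python
from urllib.parse import urlencode

def build_sts_request(role_arn, web_identity_token, session_name="azure-func"):
    """Build the unsigned STS Query API call that exchanges an Entra ID
    token from the managed identity for temporary AWS credentials."""
    params = {
        "Action": "AssumeRoleWithWebIdentity",
        "Version": "2011-06-15",
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": web_identity_token,
    }
    return "https://sts.amazonaws.com/?" + urlencode(params)

url = build_sts_request(
    "arn:aws:iam::123456789012:role/azure-backup-role",  # hypothetical role
    "eyJhbGciOi...",  # Entra ID token from the managed identity endpoint
)
```

In practice an SDK such as boto3 wraps this call, but seeing the raw request makes clear that no AWS secret is ever involved: the Azure-issued token is the only credential, and the IAM role's trust policy decides whether to accept it.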

The source code is published on GitHub: https://github.com/linkcd/Secretless-cross-cloud-access

Read More


AWS Blog - Build your multilingual personal calendar assistant with Amazon Bedrock and AWS Step Functions

Check out my latest blog post on AWS official AI/ML blog channel.

“Foreigners and expats living outside of their home country deal with a large number of emails in various languages daily. They often find themselves struggling with language barriers when it comes to setting up reminders for events like business gatherings and customer meetings. To solve this problem, this post shows you how to apply AWS services such as Amazon Bedrock, AWS Step Functions, and Amazon Simple Email Service (Amazon SES) to build a fully-automated multilingual calendar artificial intelligence (AI) assistant. It understands the incoming messages, translates them to the preferred language, and automatically sets up calendar reminders.”

Happy reading!

Blog address: https://aws.amazon.com/blogs/machine-learning/build-your-multilingual-personal-calendar-assistant-with-amazon-bedrock-and-aws-step-functions/
Source code: https://github.com/aws-samples/build-multilingual-calendar-assistant-with-amazon-bedrock-and-aws-step-functions

Read More


AWS Blog - Manage IoT device state anywhere using AWS IoT Device Shadow service and AWS IoT Greengrass

Discover my latest blog post on AWS official blog channel, where I delve into managing IoT devices from anywhere! Whether you’re interested in a humble Raspberry Pi application or eager to explore broader applications like home automation or industrial IoT solutions, this post has got you started.

Happy reading!

Blog address: https://aws.amazon.com/blogs/iot/manage-iot-device-state-anywhere/
Source code: https://github.com/aws-samples/manage-IoT-device-using-device-shadow-blog

Read More


AWS Step Functions with ECS Anywhere on NanoPi Sample

This is a demo solution that uses AWS Step Functions and ECS Anywhere to complete a simple data-processing task, combining cloud orchestration (Step Functions) with local computing resources (a NanoPi).

Data flow

  1. A user uploads a file to an S3 bucket
  2. S3 emits an event via CloudTrail and EventBridge
  3. EventBridge triggers a Step Functions state machine
  4. If the file name matches the condition, the state machine runs an ECS Anywhere task to download the file from S3 to the local machine for processing
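The state machine itself can be sketched in Amazon States Language. This is a minimal illustration, not the demo's actual definition: the cluster name, task definition, and key pattern are placeholders, but the shape, a Choice state gating an ECS RunTask with the EXTERNAL launch type (which targets ECS Anywhere instances), matches the flow above.

```json
{
  "StartAt": "CheckFileName",
  "States": {
    "CheckFileName": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.detail.object.key",
          "StringMatches": "incoming/*.csv",
          "Next": "RunLocalTask"
        }
      ],
      "Default": "Done"
    },
    "RunLocalTask": {
      "Type": "Task",
      "Resource": "arn:aws:states:::ecs:runTask.sync",
      "Parameters": {
        "Cluster": "home-cluster",
        "TaskDefinition": "download-and-process",
        "LaunchType": "EXTERNAL"
      },
      "End": true
    },
    "Done": { "Type": "Succeed" }
  }
}
```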

Architecture

NanoPi that runs ECS Anywhere

NanoPi Neo2 with LED hat in my home office, running AWS ECS Anywhere.

Read More
