2025-12-10 13:01:57
Yes — BigQuery is a columnar, distributed, serverless analytical database.
It stores data in a columnar format (Capacitor), which makes it extremely fast for analytical queries, aggregations, and large scans.
If you fetch BigQuery data row-by-row in C#, and manually build arrays, it becomes:
Slow for large datasets
CPU-heavy
Memory-heavy
Inefficient for high-volume tables
Example of the shape you want returned (here as a TypeScript interface):
export interface DealData {
  Season: number[];
  countryOfProduction: string[];
  productionGroup: string[];
  supplierName: string[];
  dealTypes: string[];
}
Instead of retrieving many rows and aggregating in C#, BigQuery can directly return one row with arrays for each column using ARRAY_AGG.
Optimized BigQuery SQL
SELECT
  ARRAY_AGG(Season) AS Season,
  ARRAY_AGG(countryOfProduction) AS countryOfProduction,
  ARRAY_AGG(productionGroup) AS productionGroup,
  ARRAY_AGG(supplierName) AS supplierName,
  ARRAY_AGG(dealTypes) AS dealTypes
FROM your_dataset.your_table;
Two caveats: ARRAY_AGG raises an error if it aggregates NULL values (use ARRAY_AGG(col IGNORE NULLS) to drop them), and if the five arrays must stay aligned row-by-row, add the same ORDER BY clause inside each ARRAY_AGG.
C# implementation
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Google.Cloud.BigQuery.V2;
using Newtonsoft.Json;

public class DealData
{
    public List<int> Season { get; set; } = new();
    public List<string> countryOfProduction { get; set; } = new();
    public List<string> productionGroup { get; set; } = new();
    public List<string> supplierName { get; set; } = new();
    public List<string> dealTypes { get; set; } = new();
}
public async Task<string> GetDealDataAsync()
{
    var client = await BigQueryClient.CreateAsync("your-project-id");

    string query = @"
        SELECT
          ARRAY_AGG(Season) AS Season,
          ARRAY_AGG(countryOfProduction) AS countryOfProduction,
          ARRAY_AGG(productionGroup) AS productionGroup,
          ARRAY_AGG(supplierName) AS supplierName,
          ARRAY_AGG(dealTypes) AS dealTypes
        FROM your_dataset.your_table";

    var result = await client.ExecuteQueryAsync(query, parameters: null);
    var row = result.First();

    // Repeated fields come back as arrays (BigQuery INT64 maps to long),
    // so convert element by element; adjust the casts to your schema.
    static List<string> Strings(object cell) =>
        ((System.Collections.IEnumerable)cell).Cast<object>().Select(v => (string)v).ToList();

    var dealData = new DealData
    {
        Season = ((System.Collections.IEnumerable)row["Season"]).Cast<object>().Select(Convert.ToInt32).ToList(),
        countryOfProduction = Strings(row["countryOfProduction"]),
        productionGroup = Strings(row["productionGroup"]),
        supplierName = Strings(row["supplierName"]),
        dealTypes = Strings(row["dealTypes"])
    };

    return JsonConvert.SerializeObject(dealData);
}
2025-12-10 13:00:00
I originally published this post on my blog.
I haven't stumbled upon any client or company forcing me to use AI.
That's not the reality for everybody in the industry. Recently I found this Reddit post from a coder who lost interest after being forced to use AI; his usage was even tracked:
"Within the span of maybe 2 months my corporate job went from "I'll be here for life" to "Time to switch careers?" Some exec somewhere in the company decided everyone needs to be talking to AI, and they track how often you're talking with it. I ended up on a naughty list for the first time in my career, despite never having performance issues. I explain to my manager and his response is to just ask it meaningless questions."
That post rang a bell! It reminded me of a conversation I had recently.
The other day, I caught up with some of my ex-coworkers, and one story stood out.
After the usual chit-chat, one of them shared that his company was encouraging them to use AI, though not as forcefully as in the Reddit post. Maybe productivity was the official reason.
But the real reason?
Turns out, one of the company founders was also investing in an AI startup. And guess which AI tool they were encouraging people to use.
As I found the other day, if you think of AI as just another subscription product pushed for profit, all the hype starts to make more sense.
The real driver isn't productivity, but financial interest.
It's easy to get caught up in the AI hype and forget coding is more than shipping crappy lines of code fast.
But coding is also about clear communication, thoughtful problem-solving, and knowing when to say no. None of that shows up in AI usage metrics.
And that's why I wrote Street-Smart Coding: 30 Ways to Get Better at Coding, to share the skills I wish I'd learned earlier: the ones that help you become a confident, hype-proof coder.
2025-12-10 13:00:00
Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself.
Reply to someone's comment, either with a question or just a hello. 👋
Come back next week to greet our new members so you can one day earn our Warm Welcome Badge!
2025-12-10 12:50:53
step1: HTML stands for HyperText Markup Language.
step2: It is used for creating web pages and their content.
step3: <!DOCTYPE html> - declares the version of the HTML document
step4: <html> - root element
step5: <head> - meta information about the HTML page
step6: <title> - title of the HTML page
step7: <body> - headings, paragraphs, images, tables, lists, etc.
step8: <h1>WELCOME</h1> - an element
step9: standalone (void) HTML tags - <br>, <img>, <link>, <hr>, etc.
step10: the latest version of HTML is HTML5.
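Put together, the steps above form a minimal HTML5 page (the heading and title text are just examples):

```html
<!DOCTYPE html>                    <!-- step3: version of the document -->
<html>                             <!-- step4: root element -->
  <head>                           <!-- step5: meta information -->
    <title>My First Page</title>   <!-- step6: page title -->
  </head>
  <body>                           <!-- step7: visible content -->
    <h1>WELCOME</h1>               <!-- step8: an element -->
    <hr>                           <!-- step9: a standalone (void) tag -->
  </body>
</html>
```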
2025-12-10 12:49:24
Over the past month, I decided to dive seriously into data science, with one clear mission:
learn how to analyze real data using R like a professional.
To challenge myself, I worked on a complete e-commerce analytics project.
It ended up being demanding, sometimes frustrating, but incredibly rewarding.
Here is what I learned, how I progressed, and why this one-month experience became a turning point in my journey.
At first, R looked unusual and a bit intimidating.
But once I started using the right libraries, everything became more natural:
dplyr for data manipulation
ggplot2 for visualization
readxl and read.csv for importing data
forecast for my first time-series predictions
Writing pipelines with %>% even became enjoyable.
It felt like guiding the computer step-by-step through a clear thought process.
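A minimal sketch of what such a pipeline looks like with dplyr, using a made-up orders data frame (the column names are illustrative, not from the actual project):

```r
library(dplyr)

# Made-up example data
orders <- data.frame(
  category = c("Books", "Books", "Toys"),
  revenue  = c(10, 20, 15)
)

orders %>%
  group_by(category) %>%                       # split by product category
  summarise(total_revenue = sum(revenue)) %>%  # aggregate revenue per group
  arrange(desc(total_revenue))                 # highest first
```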
A major lesson from this project: good organization matters.
I created separate scripts for each step of the analysis:
main.R
This approach taught me how data analysts build reproducible workflows, just like in professional environments.
I finally understood why people say:
“80% of data science is data cleaning.”
This project involved everything:
Fixing these issues helped me develop a deeper sense of how real datasets behave — and how to make them usable.
Once the data was clean, everything became much more exciting.
I analyzed:
Then came the charts:
line plots, barplots, scatter plots, heatmaps, and more.
This was the moment where the story hidden inside the data finally emerged.
Seasonal patterns showed up, certain categories dominated, and long delays clearly led to more cancellations.
The numbers weren’t just numbers anymore — they were insights.
Exploring time-series forecasting with auto.arima() was one of the most rewarding parts of the project.
I transformed the monthly revenue into a time series and predicted the next quarter.
Seeing R generate future values based on historical data made me feel like I had reached a new level:
“I’m really doing data science now.”
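For reference, that forecasting step boils down to very little code. A hedged sketch with made-up monthly figures (ts() is base R; auto.arima() and forecast() come from the forecast package):

```r
library(forecast)

# Made-up monthly revenue for one year
monthly_revenue <- ts(
  c(120, 135, 150, 160, 155, 170, 180, 175, 190, 205, 210, 220),
  start = c(2024, 1), frequency = 12
)

fit <- auto.arima(monthly_revenue)  # pick an ARIMA model automatically
forecast(fit, h = 3)                # predict the next quarter
```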
This project was much more than a homework assignment.
It was a full immersion into the world of data science with R.
I learned how to:
Most importantly, this one-month journey gave me confidence and motivation to continue.
And honestly?
This is just the beginning.
2025-12-10 12:48:38
Current AI agents control software systems (SaaS, internal tools, workflows) through vision, DOM heuristics, or textual reasoning.
These methods fail because modern applications contain:
From an AGI perspective:
The environment is not observable, not stable, and not structurally encoded.
→ No agent can reliably build a functional world model of such systems.
Thus, AI cannot plan, explain, or take actions safely.
This is not a model limitation—it is a representation problem.
Manifesto provides a formal, declarative world-model interface for software systems.
It exposes the semantics, state transitions, and action space of an application in a deterministic, machine-interpretable structure.
Instead of forcing the model to infer business rules from pixels, DOM, or natural language, Manifesto makes those rules explicit:
Domain Semantics → Snapshot → Expression-based Rules → Action Effects
In other words:
Manifesto transforms software from a black-box UI into a white-box, symbolic environment.
This is the missing substrate required for reliable AGI agents to operate real-world software.
Manifesto formalizes a domain into four machine-interpretable namespaces:
data.*
Mutable user-level inputs.
state.*
System-level or async state (e.g., loading, error, fetched lists).
derived.*
Deterministic values computed via a pure Expression DSL:
actions.*
Side-effectful behaviors executed through structured Effect graphs:
Action preconditions represent domain policies (i.e., semantic constraints).
Manifesto’s core runtime:
{ data, state, derived, validity, timestamp, version }
This creates a stable, inspectable, reproducible environment—a property no UI or DOM-based system has today.
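One possible TypeScript sketch of that snapshot shape (the helper names and the derived rule are my own illustrative assumptions, not the actual manifesto-ai/core API):

```typescript
// Hypothetical types illustrating the four namespaces plus the snapshot wrapper.
type Expression = (data: Record<string, unknown>, state: Record<string, unknown>) => unknown;

interface Snapshot {
  data: Record<string, unknown>;     // mutable user-level inputs
  state: Record<string, unknown>;    // system-level / async state
  derived: Record<string, unknown>;  // deterministic computed values
  validity: Record<string, boolean>; // which preconditions currently hold
  timestamp: number;
  version: number;
}

// Illustrative derived rule: total = price * quantity (pure, deterministic)
const derivedRules: Record<string, Expression> = {
  total: (data) => (data.price as number) * (data.quantity as number),
};

function makeSnapshot(
  data: Record<string, unknown>,
  state: Record<string, unknown>,
  version: number
): Snapshot {
  const derived: Record<string, unknown> = {};
  for (const [key, rule] of Object.entries(derivedRules)) {
    derived[key] = rule(data, state); // same inputs always yield same outputs
  }
  return {
    data,
    state,
    derived,
    validity: { canSubmit: (derived.total as number) > 0 }, // a sample precondition
    timestamp: Date.now(),
    version,
  };
}

const snapshot = makeSnapshot({ price: 10, quantity: 3 }, { loading: false }, 1);
console.log(snapshot.derived.total); // → 30
```

Because the snapshot is a plain value, an agent can diff two snapshots, replay them, or check an action's precondition without touching the UI at all.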
Manifesto exposes a unified agent-facing representation:
This enables:
No inference from UI is needed; the agent receives a structural world model similar to RL environments in research—but directly connected to real software.
Manifesto provides the symbolic substrate AGI systems have lacked:
LLMs reason over these structures much more reliably than raw UI observations.
Existing AI stacks:
LLM ↔ (DOM / Vision / Heuristics) ↔ Application
Manifesto replaces the brittle middle layer with a formal, semantic interface:
LLM ↔ Manifesto World Model ↔ Application
Because the agent knows:
It gains an unprecedented level of controllability and safety.
Every SaaS domain becomes a standardized environment:
→ Agents can transfer patterns across domains.
→ A universal semantic layer emerges.
AGI does not require an LLM to infer the structure of software systems.
Software systems already have structure—it simply isn’t exposed.
Manifesto exposes that structure.
By doing so, it provides:
This is the missing link that allows AI to act, not just predict.
Manifesto is a formal semantic interface that transforms real software systems into deterministic, explainable world models—enabling safe and generalizable AI agents to operate them.
You can access my experimental project repo and try it yourself:
https://github.com/manifesto-ai/core