
Crossed Wires

2025-12-20 09:53:10

Advent of Code 2024 Day 24

Part 1

Hurry up and wait...and see

This feels like a queue-themed challenge:

  • Establish values
  • Generate an inventory of processable steps
  • Execute steps
  • Repeat until inventory is complete

I'm concerned there's something in the rules that I may miss or not fully understand.

Hopefully stepping through the first example will be enough to gain confidence to write an algorithm.

Understanding how to architect the example input

Here it is again:

x00: 1
x01: 1
x02: 1
y00: 0
y01: 1
y02: 0

x00 AND y00 -> z00
x01 XOR y01 -> z01
x02 OR y02 -> z02

I can imagine a dictionary to store the initial values:

{
  x00: 1,
  x01: 1,
  x02: 1,
  y00: 0,
  y01: 1,
  y02: 0
}

Then each gate needs some id other than its type, but it still needs to store its type, inputs, and output.

Hmmm...

[
  {
    'type': 'AND',
    'inputs': ['x00', 'y00'],
    'output': 'z00',
    'done': false
  },
  {
    'type': 'XOR',
    'inputs': ['x01', 'y01'],
    'output': 'z01',
    'done': false
  },
  {
    'type': 'OR',
    'inputs': ['x02', 'y02'],
    'output': 'z02',
    'done': false
  }
]

That might work. It at least seems to have all pertinent information about a gate, and the index serves as a unique id.

Oh, and I will need to store and know the values of all output gates that didn't get initial values:

{
  x00: 1,
  x01: 1,
  x02: 1,
  y00: 0,
  y01: 1,
  y02: 0,
  z00: null,
  z01: null,
  z02: null
}

Understanding how to process the example input's gates

If I think of this task as repeatedly checking the entire list of gates for only the ones whose inputs have values:

For each gate
  If both inputs have values
    Add to the list

Out of the three, all gates' inputs have values.

So, the list will have all gates:

[0, 1, 2]

I can iterate through each, process the inputs, save the value in the output, and mark them as done.

  • z00 gets 0 because 1 AND 0 is 0
  • z01 gets 0 because 1 XOR 1 is 0
  • z02 gets 1 because 1 OR 0 is 1

I think I understand.

I hope I understand.

I'm ready to code.

Turning puzzle input into a state machine

I plan to generate a dictionary and an array of dictionaries.

First, the dictionary of given initial values:

let [values, gates] = input.split('\n\n')
values = values.split('\n').reduce((obj, item) => {
    let [name, value] = item.split(': ')
    obj[name] = +value
    return obj
}, {})

As expected, I see this as output:

{ x00: 1, x01: 1, x02: 1, y00: 0, y01: 1, y02: 0 }

Next, the list of dictionaries filled with each gate's parameters:

gates = gates.split('\n').reduce((list, gate) => {
    let [input1, bool, input2, , output] = gate.split(' ')
    if (!(output in values)) {
        values[output] = null
    }
    list.push({
        'type': bool,
        'inputs': [input1, input2],
        'output': output,
        'done': false
    })
    return list
}, [])

This creates the list of objects and updates the list of values to include the uninitialized values - most or all starting with z.

And I see the expected output:

[
  { type: 'AND', inputs: [ 'x00', 'y00' ], output: 'z00', done: false },
  { type: 'XOR', inputs: [ 'x01', 'y01' ], output: 'z01', done: false },
  { type: 'OR', inputs: [ 'x02', 'y02' ], output: 'z02', done: false }
]

Next, I need to make functions for each of the three booleans:

function AND (a, b) {
    if (a == 1 && b == 1) {
        return 1
    } else if (a == 0 || b == 0) {
        return 0
    }
}
function OR (a, b) {
    if (a == 1 || b == 1) {
        return 1
    } else if (a == 0 && b == 0) {
        return 0
    }
}
function XOR (a, b) {
    if (a !== b) {
        return 1
    } else if (a == b) {
        return 0
    }
}

Pretty straightforward, though maybe not concise. Still, very readable!
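
For comparison, the same truth tables could be written more concisely with JavaScript's bitwise operators (just a sketch, not what I actually ran):

const AND = (a, b) => a & b
const OR = (a, b) => a | b
const XOR = (a, b) => a ^ b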

Time to start putting this all together with a loop and a queue.

The harder part: processing each set of ready gates

A gate only processes its inputs when both have a value.

It seems only a few gates will have both input values at first. Then, as more outputs get values, more gates will become ready to process.

I need to write an algorithm that filters the full list for:

  • gates that are not done as per my object's property
  • gates whose inputs both have a non-null value

Then processes each gate's boolean and sets the output's value.

Seems straightforward.

Here I go!

let gatesRemaining = gates.length
while (gatesRemaining > 0) {
    gates.forEach(
      gate => {
        if (!gate.done) {
            if (
                 typeof values[gate.inputs[0]] == 'number' && 
                 typeof values[gate.inputs[1]] == 'number') 
            {
                values[gate.output] = eval(
                   `${gate.type}(values['${gate.inputs[0]}'],
                                 values['${gate.inputs[1]}'])`
                )
                gate.done = true
                gatesRemaining--
            }
        }
      }
    )
}

The eval is a bit gross, but it does the job of calling the correct boolean function with the two inputs.
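
A lookup object would avoid eval entirely. A quick sketch with the same data structures, swapping only the call site:

const ops = { AND, OR, XOR }

values[gate.output] = ops[gate.type](
    values[gate.inputs[0]],
    values[gate.inputs[1]]
)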

After a bit of debugging my sloppy array accessor syntax, I see the expected result:

  • values being added correctly
  • a counter decrementing correctly
  • a program ending thanks to a counter reaching 0

Time to check on the larger example input.

It works!

Piecing together the decimal

At least for the examples, I have the right keys with the right values.

Now I need to extract just the z-starting keys, put them in order with the values, and join the values to get the decimal.

This may be the toughest part!

Well, not toughest, but definitely a long statement:

let decimal = parseInt(
  Object.keys(values).filter(
    el => el[0].indexOf('z') == 0
  ).sort().reverse().map(el => values[el]).join(''), 
  2
)

  • Make a list of the key names in my dictionary of values
  • Keep only the ones starting with z
  • Sort them alphabetically
  • Reverse their order
  • Replace each one with that key's associated value
  • Join them all into a string of 1s and 0s
  • Convert that binary string into a decimal
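
As a quick sanity check, the small example's outputs (z00 = 0, z01 = 0, z02 = 1) join into '100', and:

parseInt('100', 2) // 4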

It works!

I see 2024 in my output!

Now, do I think it will work on my puzzle input?

Fingers are tightly crossed.

Although I'm fully expecting it to either never terminate or to throw an error because of something I overlooked.

Time to find out...

Running my algorithm on my puzzle input

...

Wow! It ran and finished instantly!

And it output a big 'ol number!

And it was the correct answer!

Woohoo!!!

I'm so proud of myself for earning these gold stars on days above 20.

Ok, time to see what Part 2 demands.

Part 2

Confusion and intimidation

There's a lot more explanation here.

A lot about binary numbers.

And swapping.

And...more.

I'm confused.

And knowing how large my puzzle input is, I'm fully intimidated to even attempt understanding and coding anything.

So, I sadly bid today farewell without attempting Part 2.

It's a bummer, but my head hurts just reading each short paragraph in this additional explanation.

One gold star earned. Good riddance, Day 24.

Secure Secrets in Google Apps Script

2025-12-20 09:43:43

  • The Problem: Apps Script lacks specific support for secrets, leading to hardcoded secrets.
  • The Solution: Use Properties Service for config and Secret Manager for high-value secrets.

Unlike many modern development environments that support .env files or have built-in secret management deeply integrated into the deployment pipeline, Google Apps Script has historically left developers to fend for themselves.

It is all too common to see API keys, service account credentials, and other sensitive data hardcoded directly into Code.gs.

Stop doing this.

Hardcoding secrets makes your code brittle and insecure. If you share your script or check it into source control, your secrets are compromised.

Fortunately, there are two ways I handle configuration and secrets securely in Apps Script: the Properties Service and Google Cloud Secret Manager.

For service accounts specifically, I can often avoid keys entirely by using Service Account Impersonation.

Script Properties

For general configuration, environment variables, and non-critical keys, the built-in PropertiesService is the easy choice. It allows me to store key-value pairs that are scoped to the script and kept out of the code itself.

I can set these manually in the editor (Project Settings > Script Properties) or programmatically.
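
Setting them programmatically is just as easy. A minimal sketch using setProperty and setProperties (the property names and values here are only placeholders):

function setupProperties() {
  const scriptProperties = PropertiesService.getScriptProperties();

  // Set a single property
  scriptProperties.setProperty("API_KEY", "replace-with-your-key");

  // Or set several at once
  scriptProperties.setProperties({
    A_NUMBER: "42",
    SERVICE_ACCOUNT_KEY: JSON.stringify({ client_email: "sa@example.iam.gserviceaccount.com" }),
  });
}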

Script Properties in Apps Script Editor

Here is how I retrieve and parse them effectively. Note that getProperty always returns a string, so I need to handle type conversion myself.

function main() {
  // Get the Script Properties
  const scriptProperties = PropertiesService.getScriptProperties();

  // Properties are Strings
  const API_KEY = scriptProperties.getProperty("API_KEY");
  console.log(API_KEY);

  // Properties can be parsed as Number
  const A_NUMBER = Number.parseFloat(scriptProperties.getProperty("A_NUMBER"));
  console.log(A_NUMBER, typeof A_NUMBER);

  // Properties can be JSON strings
  const SERVICE_ACCOUNT_KEY = JSON.parse(
    scriptProperties.getProperty("SERVICE_ACCOUNT_KEY") ?? "{}",
  );
  console.log(SERVICE_ACCOUNT_KEY);
}

Google Cloud Secret Manager

For high-value secrets, like database passwords, API keys, or service account keys, Script Properties might not be enough. They are still accessible to anyone with edit access to the script.

In these cases, I leverage the Google Cloud Secret Manager. Since every Apps Script project is backed by a default Google Cloud project (or a standard one linked to it), I can use the UrlFetchApp to retrieve secrets directly from the GCP API.

This approach requires:

  1. Enabling the Secret Manager API in the GCP project.
  2. Granting the Secret Manager Secret Accessor role (roles/secretmanager.secretAccessor) to the user running the script. (If you created the secret, you should have this role already.)
  3. Adding the standard https://www.googleapis.com/auth/cloud-platform scope to appsscript.json.

{
  "timeZone": "America/Los_Angeles",
  "dependencies": {},
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "V8",
  "oauthScopes": [
    "https://www.googleapis.com/auth/script.external_request",
    "https://www.googleapis.com/auth/cloud-platform"
  ]
}

Here is a reusable function to fetch and decode secrets on the fly:

function main() {
  // ... existing code ...

  // Use Google Cloud secret manager
  // Store the CLOUD_PROJECT_ID in Script Properties to keep the code clean
  const projectId =
    PropertiesService.getScriptProperties().getProperty("CLOUD_PROJECT_ID");
  if (!projectId) {
    throw new Error(
      "Script property 'CLOUD_PROJECT_ID' is not set. Please add it to Project Settings.",
    );
  }
  const MY_SECRET = getSecret(projectId, "MY_SECRET");

  console.log(MY_SECRET);
}

/**
 * Fetches a secret from Google Cloud Secret Manager.
 * @param {string} project - The Google Cloud Project ID
 * @param {string} name - The name of the secret
 * @param {string|number} version - The version of the secret (default: 'latest')
 * @returns {string} The decoded secret value
 */
function getSecret(project, name, version = "latest") {
  const cache = CacheService.getScriptCache();
  const cacheKey = `secret.${name}.${version}`;
  const cached = cache.get(cacheKey);
  if (cached) return cached;

  const endpoint = `projects/${project}/secrets/${name}/versions/${version}:access`;
  const url = `https://secretmanager.googleapis.com/v1/${endpoint}`;

  const response = UrlFetchApp.fetch(url, {
    headers: { Authorization: `Bearer ${ScriptApp.getOAuthToken()}` },
    muteHttpExceptions: true,
  });

  if (response.getResponseCode() >= 300) {
    throw new Error(`Error fetching secret: ${response.getContentText()}`);
  }

  // Secrets are returned as base64 strings, so we must decode them
  const encoded = JSON.parse(response.getContentText()).payload.data;
  const decoded = Utilities.newBlob(
    Utilities.base64Decode(encoded),
  ).getDataAsString();

  // Cache for 5 minutes (300 seconds)
  cache.put(cacheKey, decoded, 300);
  return decoded;
}

Wait, did we just go in a circle?

Yes, I am suggesting you store the Project ID of your secrets vault inside the Script Properties where we used to carelessly toss your API keys. But unlike a raw credential, a Project ID is just a pointer. Think of it as the difference between publicly listing your home address versus leaving your front door unlocked. People can know where you live, but without permissions, they can’t come in!

Why Caching Matters

Retrieving a secret via UrlFetchApp involves an external network request, which adds latency to your script’s execution. Furthermore, Google Cloud Secret Manager has usage quotas and costs associated with API calls.

In the getSecret function above, I use CacheService to store the decoded secret. This ensures that subsequent calls within the same environment don’t trigger unnecessary network overhead, making the script significantly faster and more resilient to API rate limits.

Why go this far?

Using Secret Manager provides audit logging, versioning, and finer-grained IAM controls. By combining PropertiesService for configuration and Secret Manager for actual secrets, I can keep Code.gs clean and secure.

Additional Reading

Secure Secrets in Google Apps Script © 2025 by Justin Poehnelt is licensed under CC BY-SA 4.0

My experience with the ninjas of Microsoft 🥷🏾

2025-12-20 09:40:28

Hackathon

The hackathon at Westlake Brewery featured inclusive app development and unique tasks, such as spinning a challenge wheel. A memorable challenge was telling the story behind Oktay Sari’s nickname, the “Dutch Cowboy.”

During the hackathon, I was panicking—my team was mostly security-driven, and I didn’t know where to start. Our captain, Ugur Koc, had a clear plan: build a website with an AI chatbot that matches users based on their time zone, communication style, and work habits. The chatbot answers queries like, 'How well do I work with Artist?' or gives advice on adapting to different team styles.

Ugur introduced the T3 app, a tool that helps build the website's frontend (the part users see and use) using programming languages suitable for both server-side and client-side development.

For the AI agent, we used OpenAI tools to enable chatbot functionality. To store users’ preferences, we set up a PostgreSQL database and managed it through Supabase, a platform that makes it easier to use the database online.

The team decided that all participants should log in using their hackathon credentials via Entra ID (an identity service). We requested permission to register the app, specifying that it should function as a single-page application.

We enabled Row-Level Security (RLS) for the database so that only authenticated users could enter their own data and could only view information belonging to other users, ensuring data privacy and security.

We also bought a domain named big-corporation.org.

Overall, I learned a lot, got valuable career advice, and felt more confident by the end. After we wrapped up, the event transitioned to presentations and awards. The energy carried over as the conference continued with more sessions and learning opportunities.

And here is a screenshot of the app/website

At the end of the event, we won first place! The team decided I should take home the golden clippy, which was a great way to conclude the hackathon and move into the next phase of learning at the conference.

Day 1

We started the opening with actual ninjas! I was surprised and out of breath watching them flip and do swordplay.

Navigating the New Frontier: Embracing Cloud-Native and AI for Enhanced Security and Productivity

This talk was about how Copilot can integrate with Intune to make workflows more efficient. Some of the things I learned about that piqued my interest were:

The Copilot agent can be used as a change review assistant. It analyzes your change request and provides recommendations based on it; for example, if you request adding a firewall rule, the agent could say, ‘Hey, this might break such and such.’

Use the agent to offboard BYODs from inactive Entra ID users in the tenant.

There will be a dashboard that explains to admins what the agent wants to do, and you or other admins can approve or deny the tasks.

The agent will not have its own separate permissions; instead, it will execute any tasks you approve or automate on your behalf.

Here is a link to learn more: https://techcommunity.microsoft.com/blog/microsoftintuneblog/whats-new-in-microsoft-intune-at-ignite/4471043.

Brains, Bloopers, and Bytes: The Fun Side of Neurodiversity/Neurodegeneration in Tech

Somesh Pathak’s openness about Parkinson's Disease resonated with me, as I also have ADHD, and public speaking gives me anxiety. I admired his bravery, and his friends surprised him with gifts and a cake.

The interaction was very wholesome, and it resonated with me. I hope to experience that level of support from friends one day.

Link to slides: https://github.com/mobilejon/WorkplaceNinjasUS/blob/main/Brains%2C%20Bloopers%2C%20and%20Bytes.pptx

DIY Intune Tools: PowerShell + Graph = Admin Superpowers

In this session, I learned how Ugur Koc and Jannik Reinhard built tools using PowerShell and the Graph API to streamline security tasks.

Everything in Intune uses the Graph API. Every button click and every data point you see in the Intune portal is powered by Microsoft Graph behind the scenes.

The Graph API URL structure is graph.microsoft.com/[version]/[resource]?[parameters]

While v1.0 is officially supported, beta endpoints provide much more data and functionality. Most of the Intune portal itself uses beta endpoints.
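
As a rough illustration of that URL shape (my own sketch, not from the talk; the GRAPH_TOKEN variable is just a placeholder), a call to the v1.0 /me resource with an access token you already have looks like this in JavaScript:

// Hypothetical sketch: [version] = v1.0, [resource] = me, [parameters] = $select
const accessToken = process.env.GRAPH_TOKEN; // assumed to be acquired elsewhere

fetch("https://graph.microsoft.com/v1.0/me?$select=displayName,id", {
  headers: { Authorization: `Bearer ${accessToken}` },
})
  .then((res) => res.json())
  .then((user) => console.log(user));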

There are three ways to discover API endpoints:

  1. Microsoft Graph Explorer (developer.microsoft.com) - Test queries, see raw data, and generate code snippets
  2. Browser Developer Tools - Open Network tab (F12), perform actions in the portal, and copy the exact API calls
  3. Graph X-Ray browser extension - Automatically generates PowerShell code from your portal actions

Some authentication best practices for scripting in PowerShell:

  1. Managed Identity (best for Azure resources) - No secrets, lifecycle tied to the resource, most secure
  2. Service Principal with Certificate - For non-Azure environments
  3. User Authentication - Only for local, one-off scripts

Managed identities can't have permissions added via the Azure portal UI - you must use PowerShell scripts to assign Graph permissions.

Use the PowerShell SDK when:

  1. You need quick, simple authentication.
  2. Token refresh automation is important.
  3. You're comfortable managing module dependencies.

Use native Invoke-RestMethod when:

  1. You want to avoid PowerShell module management nightmares.
  2. You're running in environments with module conflicts.
  3. You need maximum portability across systems.

Never install the full Microsoft.Graph module - it's massive and nearly impossible to update. Only install specific modules, such as Microsoft.Graph.Authentication. The Graph API limits responses to ~100 objects; you must handle pagination for larger datasets. In Azure Automation Accounts, create custom runtime environments with preloaded modules to avoid reinstalling dependencies on every run.

For MSP/Multi-Tenant Environments, use Azure Lighthouse to:

  1. Deploy runbooks across multiple client tenants.
  2. Centrally manage automation at scale.
  3. Configure tenant-specific permissions
  4. Execute scheduled tasks across your entire customer base.

Don't build from scratch. Search the community first. About 80% of automation needs have already been solved by someone else. MVPs, GitHub repositories, and community blogs are goldmines of ready-to-use solutions:

IntuneAutomation.com - 35-40 ready-to-use scripts with "Deploy to Azure" functionality

Open source templates - Remediation scripts, detection scripts, notification templates

Graph X-Ray - Browser extension for automatic code generation

IntuneChange.com - Track and visualize configuration changes over time

Link to slides: https://github.com/mobilejon/WorkplaceNinjasUS/blob/main/DIY%20Intune%20Tools%20PowerShell%20GraphAdmin%20Superpowers.pptx

Break/Side-quests

I met Mona Ghadiri, who introduced me to other MVPs, which led me to connect with them. We all sat around the table, and some gems were shared:

  1. Your Resume Should Tell a Story, Not Check Boxes
  2. ATS Systems Won't Get You Hired, People Will
  3. Stop Performing, Start Being Authentic
  4. Your "Why" Matters More Than Your Certifications
  5. Honor Your Past, Don't Hide It

The last session of day one was the Women in Tech Panel, where I learned the backgrounds of Esther Barthel, Mona Ghadiri, Ewelina Paczkowska, and Lavanya Lakshman. One thing I remember from this panel was how Ewelina had signed up for a SQL class in high school. When she entered the room, there were 20-25 boys, and being the only girl made her so nervous that she never took the class. In a male-dominated space, I understand how small you can feel as a woman.

Day 2

The day started with the talk ‘The Everywhere Desktop: Secure productivity on any device with the Windows Cloud’, which discussed the use of AI and AVD together. I don’t have any notes for this talk, but here are the PowerPoint slides used.

Link to slides:

https://github.com/mobilejon/WorkplaceNinjasUS/blob/main/Ninja%20US%20Keynote%20-%20Windows%20Cloud.pdf

https://github.com/mobilejon/WorkplaceNinjasUS/blob/main/Ninja%20US%20Keynote%202%20-%20Frontier%20Firms%20-%20Powering%20the%20Future%20with%20AI%E2%80%91Enabled%20Cloud%20PCs%20and%20Windows%20365%20for%20Agents.pdf

Side-quests

I wanted more guidance on my career path and what I want to do, so I booked two 1:1 sessions, with Fabian Bader and Ugur Koc. I asked Fabian questions like 'What signals differentiate a junior who scripts tasks from a mid-level engineer who designs automation systems?' and 'How do you decide which Azure and M365 security controls to automate first in an enterprise environment?' All of my questions were meant to give me a little insight into how to grow strategically into a mid- to senior-level position, and how to recognize and improve on who I was yesterday. He gave me wonderful advice and made me realize I want to continue moving forward with cloud security, with DevSecOps and security automation as my main focus. My talk with Ugur was about how to start creating automation tools like his; the main thing he said I should focus on is APIs.

Tenant Tetris: Stacking securely with Microsoft Defender MTO

I had never heard of MTO before this talk, and I remember it because it inspired me and left me with some good nuggets.

Managing multiple Microsoft Defender tenants creates three core challenges: context switching between tenants (like playing multiple Tetris games simultaneously), configuration misalignment across environments, and scale issues when managing dozens or hundreds of tenants.

Instead of creating separate incident queues for each tenant, organize your SOC by squad specialization and severity levels. This approach maintains consistent psychological standards of care for all clients and lays the foundation for AI-powered workflow automation.

Remove humans from the permission-granting loop by implementing configuration-as-code for access control. Build multi-layered fail-safe controls using conditional access policies at both the service provider and client tenant levels, combined with unified RBAC in Defender.

  1. Create a centralized repository (monorepo) to manage detections, policies, onboarding procedures, and permissions across all tenants.
  2. Use a parameters database to store tenant-specific variables, preventing hard-coded values in detection logic and enabling scalable change management.
  3. Never hard-code values into detection rules; use parameter files instead.
  4. Be generous with matching logic (use "is not empty" rather than exact values).
  5. Design detections to gracefully handle missing fields or tables when connectors fail.

While MTO provides multi-tenant visibility and shared incident queues, Microsoft doesn't offer operational models for staffing, identity management, or configuration-as-code frameworks. You'll need to build your own blueprints for drift detection, fleet layer intelligence, governance orchestration, and feedback optimization.

When addressing CISO concerns about tenant commingling, emphasize that multi-tenant operations use federation (maintaining autonomy) rather than integration (combining systems). Each tenant retains its own policies, data boundaries, and response action controls.

The same CI/CD pipeline and automation framework can serve multiple use cases: detections, onboarding, policy deployments, permissions management, and change control. This reduces duplicative infrastructure across different IT teams.

Implement automated "cadence engines" (cron jobs) to continuously verify connector health, detection rule integrity, and configuration drift across all tenants because manually logging into 50+ tenants isn't feasible.

The infrastructure patterns for multi-tenant security operations often mirror what identity teams, help desks, and other IT functions need. Presenting unified configuration-as-code approaches can demonstrate cost savings and efficiency gains across the entire organization.

Thank you, Mona, for these gems!

Neurodiversity in Tech

To top it all off, I was able to sit in on a presentation about the highs and lows of being neurospicy and how other neurodivergent folks can navigate them in the tech space. There is nothing to be ashamed of; embrace it. It’s a superpower, not a crutch.

Conclusion

This conference had a lot of great people, amazing talks, and even satisfying food. I really enjoyed myself! If anyone wants to come to the next one, there will be another Workplace Ninjas US conference in Arizona in February 2027. Here are some flicks from the conference!

AI continues its pervasive integration across tech, from hardware and software to marketing and ethical considerations.

2025-12-20 09:35:27

The AI landscape is rapidly expanding, with significant developments across hardware, software, and ethical considerations. This post dives into key updates, including Apple's potential OLED iMac, Google Gemini's new AI mini-app capabilities via Opal, and OpenAI's launch of the ChatGPT App Store. We'll explore how these innovations are democratizing AI development and usage.

Furthermore, we'll examine AI's practical applications in hardware enhancements (Gigabyte, ASUS), smart home technologies (Ring), and marketing strategies (Zara). The discussion also touches upon the challenges of AI-generated content, as seen with YouTube's actions against fake trailers, and delves into the philosophical quandaries surrounding AI consciousness.

AI without the hype: using LLMs to reduce noise, not replace thinking

2025-12-20 09:17:56

This is part 4 of my series on AppReviews.
Part 1 is available here
Part 2 is available here
Part 3 is available here

Grouped reviews per topic

Topic cloud to easily see the most common topics

At some point while building AppReviews, I started thinking about AI.

Not in a “this needs AI” way. More in a quiet, slightly reluctant way.

I already had a system that fetched reviews, pushed them to Slack, and made sure feedback was not missed. That alone solved the core problem. But once reviews are always visible, a new issue appears.

There are a lot of them.

Some are useful.

Some are vague.

Some are emotional.

Some are duplicates of the same issue written slightly differently.

Reading every single review works, up to a point. After that, you are back to scanning, skimming, and mentally filtering noise.

That is when I started asking myself a different question:

What if the system helped me understand reviews faster, without pretending to think for me?

What I did not want

Before adding anything AI-related, I was very clear about what I did not want to build.

I did not want:

  • AI-generated summaries pretending to be insights
  • magic scores without explanation
  • a chatbot answering users on my behalf
  • another dashboard full of charts that look smart but do not help decisions

Most of all, I did not want AI to become the product.

AppReviews exists to shorten the feedback loop between users and product teams.

AI should support that goal, not distract from it.

So the bar was high.

If AI did not reduce cognitive load in a very concrete way, it did not belong.

The actual problem AI helps with

The real problem is not understanding one review.

It is understanding many reviews over time.

When ten users describe the same issue in ten different ways, humans are great at seeing the pattern. But only after reading all ten. That does not scale well when reviews keep coming in.

What I wanted help with was:

  • grouping similar feedback
  • spotting recurring topics
  • getting a rough sense of sentiment trends
  • surfacing urgency when something suddenly spikes

Not answers. Signals.

Why embeddings first, not prompts everywhere

The first building block I added was embeddings.

Every review can be turned into a vector that captures its meaning. Once you have that, you can compare reviews semantically instead of relying on keywords or star ratings.

That immediately unlocks useful things:

  • similar reviews can be grouped together
  • topics emerge naturally instead of being predefined
  • you can detect “this feels like the same problem again”

For embeddings, I use nomic-embed-text. It is fast, local, and good enough for this use case. Each review becomes a 768-dimension vector stored alongside the raw text.

This step alone already adds value, even without a large language model generating text.
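
To make that concrete, here is a minimal sketch of embedding a review against a local Ollama instance and comparing it to another one (the endpoint and fields follow Ollama's embeddings API; the helper names are mine):

// Embed a review with nomic-embed-text running locally in Ollama
async function embedReview(text) {
  const res = await fetch("http://localhost:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const { embedding } = await res.json(); // 768 numbers for nomic-embed-text
  return embedding;
}

// Cosine similarity: close to 1 means "these reviews say roughly the same thing"
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}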

Where LLMs come in, carefully

On top of embeddings, I added a second, optional layer using a large language model.

The model I use is llama3.1:8b, running locally via Ollama. This was an intentional choice.

I wanted:

  • no per-token cost anxiety
  • no external API dependency
  • something that could run on my machine or a small server

The LLM is used for very specific tasks:

  • estimating sentiment
  • extracting high-level topics
  • detecting tone such as angry, neutral, or positive
  • flagging urgency when relevant

Each review is processed independently.

No long context.

No agents.

No orchestration complexity.
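
The per-review call is along these lines. Again a sketch: the prompt and field names are mine, the endpoint is Ollama's generate API:

// Ask llama3.1:8b for structured signals about a single review
async function analyzeReview(text) {
  const prompt =
    'Return JSON with keys "sentiment" (positive|neutral|negative), ' +
    '"topics" (array of short strings), and "urgent" (boolean) for this app review:\n\n' + text;

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.1:8b", prompt, stream: false, format: "json" }),
  });
  const { response } = await res.json();
  return JSON.parse(response); // e.g. { sentiment: "negative", topics: ["onboarding"], urgent: false }
}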

And most importantly:

This entire pipeline is optional.

If the AI processor is disabled or unavailable, AppReviews works exactly the same. Reviews are still fetched, stored, sent to Slack, and visible in the dashboard.

AI is an enhancement, not a dependency.

Async, isolated, and easy to turn off

From an architectural point of view, AI processing is completely decoupled from the core flow.

  • reviews are saved first
  • only then are they queued for analysis
  • processing happens asynchronously, in small batches
  • failures are retried a few times and then dropped

If Ollama is not running, nothing breaks. There is no user-facing error. The system simply skips analysis.
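
In code, that isolation boils down to something like this (the function names are hypothetical; the shape is the point):

// Retry a few times, then drop the analysis without touching the stored review
async function processQueuedReview(review, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const analysis = await analyzeReview(review.text); // throws if Ollama is unreachable
      await saveAnalysis(review.id, analysis);            // hypothetical persistence helper
      return;
    } catch (err) {
      if (attempt === maxAttempts) {
        // Give up silently: the review is already saved, in Slack, and on the dashboard
        console.warn(`Skipping AI analysis for review ${review.id}: ${err.message}`);
      }
    }
  }
}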

This was non-negotiable.

AI systems fail in weird ways. None of that should impact the primary job of the product.

Why not “AI-generated insights”

This is probably the question I get the most.

Why not generate summaries like:

“Users are unhappy about onboarding”

or:

“Most complaints are about performance”

The short answer is simple.

I do not trust them.

Those summaries look nice, but they hide uncertainty. They compress nuance into something that feels authoritative, even when it is not.

Instead, AppReviews surfaces raw signals:

  • these reviews are similar
  • this topic appears often
  • sentiment around this feature dropped last week

From there, a human can decide what it means.

AI should help you see where to look, not tell you what to think.

Cost, control, and boring decisions

Running everything locally with Ollama is not the most scalable choice. But it fits the constraints perfectly.

  • no variable costs
  • no surprises
  • no API keys to rotate
  • no privacy questions about sending user feedback elsewhere

If AppReviews grows, swapping the AI backend is relatively easy. The interface is already isolated.

For now, this setup is predictable and controllable.

That matters more than squeezing out the last percentage of accuracy.

What AI does not do, on purpose

To be very clear, AppReviews does not:

  • reply to reviews automatically
  • decide which feedback matters
  • replace reading reviews
  • predict user behavior
  • generate product decisions

AI does not talk to users.

AI does not act on their behalf.

AI does not override human judgment.

It just reduces repetition and helps patterns emerge faster.

The outcome so far

In practice, this approach works surprisingly well.

You still read reviews.

You still reply yourself.

You still make decisions.

But you do it with more context and less noise.

And that is the only promise I am comfortable making.

Start with the human workflow.

Figure out where attention is wasted.

Then see if AI can help reduce that cost.

Not the other way around.

Why Emstrata is the Best AI Storytelling App

2025-12-20 09:07:49

How We Built an AI Storytelling Platform That Actually Works

AI storytelling platforms promise collaborative narrative experiences where you and artificial intelligence co-create stories that feel alive and unpredictable. The reality? Most deliver chatbot conversations with a creative writing bent. Characters forget their own names. Locations shift properties mid-scene. The AI manipulates outcomes to serve convenient plot beats instead of genuine simulation.

Emstrata is different. It's an emergent narrative engine built on a four-layer AI architecture designed to solve the core problems plaguing collaborative AI storytelling: consistency failures, fake probability, information leakage, and the systematic replacement of human creative agency with algorithmic convenience.

The Emstrata Cycle: Four-Layer AI Architecture for Narrative Consistency

Most AI storytelling apps use single-layer LLM systems that generate text in response to user input. This approach fails for extended narratives because one system can't simultaneously maintain continuity, manage spatial relationships, generate compelling prose, and track information compartmentalization.

Emstrata's four-layer architecture—The Emstrata Cycle—assigns specialized roles to distinct AI systems:

The Groundskeeper: Persistent Memory and Knowledge Management

The Groundskeeper functions as institutional memory for your simulation. It maintains detailed records of established facts: character descriptions, personality traits, location properties, relationship dynamics, revealed secrets, and world-building elements. When other layers need information about existing entities, they query The Groundskeeper's verified database instead of regenerating from degraded context.

This prevents redescription—the common failure where AI systems inconsistently reframe previously established elements. Your merchant with calloused hands keeps those calloused hands across hundreds of turns. Your cramped tavern basement doesn't mysteriously become a spacious wine cellar. Character personalities remain stable because there's an actual system enforcing consistency.

The Discovery Layer: Spatial Reasoning and Consequence Planning

The Discovery Layer handles what happens when you act. It uses the Simulation Positioning System (SPS), a coordinate-based mapping architecture that tracks where you are and what exists around you. When you open doors, travel to new locations, or search rooms, The Discovery Layer determines outcomes based on established continuity, narrative logic, and dramatic potential—not convenience.

The Discovery Layer also plans ripple effects from participant actions. Antagonize a powerful character? The system tracks that relationship degradation and plans how it might manifest later. Make a deal with consequences? Those consequences get scheduled into future simulation events. This creates genuine cause-and-effect chains instead of episodic encounters that reset between scenes.

The Narration Layer: Prose Generation and Character Interiority

The Narration Layer writes the text you read. It synthesizes structural decisions from The Discovery Layer, continuity requirements from The Groundskeeper, and quality checks from The Chron-Con into flowing narrative prose.

The Narration Layer also generates "Your Eyes Only" sections—private character thoughts and internal reactions invisible to other participants. This creates the experience of inhabiting a character's consciousness, not just observing their external actions. You know what your character thinks about situations, even when those thoughts remain unspoken.

The Chron-Con: Quality Control and Continuity Verification

The Chron-Con (Chronology Context) reviews generated narrative for logical consistency, timeline accuracy, and character behavior coherence. When it detects errors or contradictions, it replaces problematic text with corrected versions before the narrative finalizes.

The Chron-Con also manages character stat tracking (Health, Essence, Tether) and catalogs information revelation, feeding updates back to The Groundskeeper so future narrative generation accounts for changed circumstances. This creates a feedback loop where the simulation's knowledge state stays current with narrative developments.

Genuine Probability Mechanics Instead of Narrative Convenience

AI storytelling platforms commonly suffer from probability punditry—the unconscious bias toward dramatically convenient outcomes. The AI "knows" that finding the hidden item would advance the plot, so you find it. The AI "knows" that character death would be inconvenient, so your risky action succeeds. Every outcome feels predetermined by dramatic logic rather than emerging from genuine simulation.

Emstrata implements explicit probability rolls handled by backend systems, not LLM judgment. When outcomes depend on chance, the platform runs weighted randomness calculations that account for character capabilities, environmental factors, and established context—but remain genuinely probabilistic. The AI reports results; it doesn't choose convenient outcomes.

This preserves collaborative uncertainty. Success feels earned because failure was genuinely possible. Exploration feels meaningful because you can't predict what you'll discover. Risk actually carries weight because the simulation isn't protecting you from consequences.

The Injector System: Structured Unpredictability

The Injector System introduces complications into narratives 20% of the time, creating structured unpredictability that prevents AI storytelling from becoming too convenient or predictable:

Subversive Injectors (15% activation rate) bring environmental complications—weather changes, unexpected arrivals, missing items, circumstances that disrupt plans.

Archetypal Injectors (5% activation rate) introduce new characters who complicate social dynamics and create fresh dramatic tension.

Afterlife Injectors handle character death by transforming it from "game over" into narrative transition, keeping players engaged even after their character dies.

When you see "Emstrata's Turn" instead of your input field, the simulation is taking control to introduce chaos. Sometimes injectors cascade—multiple injectors firing on consecutive turns—creating intense sequences where plans collapse and you can only watch circumstances spiral. This is rare but creates the genuine unpredictability that makes simulation feel real.
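
Purely as an illustration of those published rates (this is not Emstrata's code), a turn-level roll could be sketched like this:

// Hypothetical sketch of the stated injector rates: 15% subversive, 5% archetypal, 80% none
function rollInjector() {
  const roll = Math.random();
  if (roll < 0.15) return "subversive";
  if (roll < 0.20) return "archetypal"; // 0.15 + 0.05 = the 20% total activation rate
  return null; // the other 80% of turns: the participant keeps control
}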

Multi-Participant AI Storytelling with Information Compartmentalization

Most AI storytelling platforms struggle with multi-participant scenarios because maintaining separate information streams for different players requires sophisticated state management. Information leaks between player perspectives. Secrets become public knowledge. Private character thoughts appear in shared narrative.

Emstrata handles multi-participant simulations through rigorous information compartmentalization:

Separate Knowledge States: The Groundskeeper maintains distinct knowledge records for each character, tracking what each participant knows independently.

Your Eyes Only Sections: Private character interiority remains invisible to other participants, preserving asymmetric information dynamics.

Spatial Separation: The SPS ensures participants in different locations receive appropriate environmental context based on their position.

Secret Management: Information revelation is tracked precisely, ensuring secrets stay secret until narratively disclosed.

This enables genuine collaborative storytelling where multiple participants pursue separate agendas with different information sets, creating emergent social dynamics impossible in single-player scenarios.

Creative Control Tools for Human Agency

Emstrata prioritizes human creative agency over algorithmic optimization. The platform provides tools that let participants maintain narrative control while leveraging AI capability:

The Invisible Hand

The Invisible Hand lets participants inject narrative elements without taking character action. Want weather to change? Environmental complications to emerge? A specific character to arrive? The Invisible Hand weaves these suggestions seamlessly into the narrative without requiring awkward character behavior to justify them.

The Protest Function

The Protest Function lets participants reject AI-generated content that contradicts established facts or doesn't serve their creative vision. Hit protest, and the system regenerates while accounting for why the previous version was inadequate. This prevents small errors from compounding into narrative collapse.

Orchestrator Mode

Orchestrator Mode gives simulation designers comprehensive control over narrative mechanics. Edit probability parameters before rolls occur. Modify instructions to The Narration Layer. Predefine Injector System interventions. Manipulate simulation state directly. This transforms Emstrata from collaborative tool into authorial instrument for educators, trainers, game masters, and narrative designers.

Pela: The Artistic Discipline of AI-Mediated Narrative Simulation

Pela (Performing Emergent Lives Artistically) represents the artistic discipline enabled by Emstrata's architecture. Like jazz improvisation or live theater, Pela combines structure with spontaneity—participants and AI co-create narratives neither could produce alone.

The best Pela moments happen when collaboration creates unexpected but perfect outcomes, when simulations surprise everyone involved, when you forget you're working with AI and just experience the story. These moments can't be forced, but Emstrata's architecture creates conditions where they become possible.

Pela applications extend beyond entertainment into experiential learning, skills training, scenario planning, and professional development. Want to learn Spanish? Run a simulation where you operate a Barcelona café. Practice difficult workplace conversations? Simulate them with realistic consequences. Understand historical events? Experience them from inside, making decisions in real-time.

Practical Applications for AI Storytelling Technology

Emstrata serves multiple use cases through the same architectural foundation:

Interactive Fiction and Narrative Games: The platform functions as an infinitely flexible game master, generating content on the fly while maintaining world consistency.

Experiential Learning: Complex simulations for language acquisition, historical education, and subject matter exploration through lived experience.

Skills Development: Workplace scenario training, negotiation practice, crisis response preparation, and soft skills development in consequence-free environments.

Collaborative Entertainment: Multi-participant narrative experiences ranging from casual storytelling to ambitious collaborative fiction projects.

Scenario Planning: Organizations can model complex decisions in simulated environments, exploring potential outcomes before real-world implementation.

The same systems preventing character inconsistency in fiction also ensure training scenarios maintain behavioral realism. The same probability mechanics creating unpredictable adventures also make skills training feel authentic. The same information compartmentalization enabling mystery narratives also creates realistic scenario planning with asymmetric knowledge.

Why Architecture Matters for AI Storytelling Apps

The difference between functional and compelling AI storytelling lies in architectural sophistication. Surface-level chatbot interfaces can generate impressive text snippets, but extended narrative experiences require:

  • Persistent memory systems for entity and world consistency

  • Explicit probability mechanics for genuine randomness

  • Information compartmentalization for logical knowledge boundaries

  • Multi-layer coordination for maintaining coherence at scale

  • Human control tools that preserve creative agency

Emstrata's four-layer architecture, genuine probability mechanics, information management systems, and human control tools represent comprehensive solutions to problems other platforms ignore or paper over with impressive-sounding promises.