Beyond the Firewall: Unlocking Trusted Web Search for Agentforce with OpenAI

2026-01-13 09:52:42

Your Agentforce Agent is a genius regarding your CRM data. It knows every Opportunity, Case, and Contact inside your org. But let’s be honest—sometimes, being stuck inside the "Salesforce bubble" limits its potential.

What happens when your Sales Rep asks, "What is the latest news on our competitor's merger?" or "What are the current compliance regulations for AI in the EU?"

Usually, the Agent says: "I don't have that information." 🛑

Connecting an AI to the open internet often rings alarm bells for Architects and CISOs. Is it safe? Is it private? Will it hallucinate?

Today, we’re going to solve this using the standard "Search The Web" action. But we aren't just giving it raw internet access. We are configuring it to use OpenAI as the search provider, routed through the Einstein Trust Layer, ensuring that your Agent's trip to the web is secure, grounded, and enterprise-grade.

Why "Trusted" Web Search Matters

Before we build, it's critical to understand why this isn't just a simple API call.

When you use the out-of-the-box webSearchStream action with Agentforce, the request flows through the Einstein Trust Layer. This means:

  • Secure Gateway: Your Agent doesn’t just "Google it." The query is passed through Salesforce’s secure AI gateway.
  • Zero Data Retention: When utilizing providers like OpenAI via this integration, your search data is not stored or used for model training by the provider.
  • Grounding: The search results aren’t just pasted into the chat; they are used as grounding context for the LLM, reducing hallucinations and ensuring the answer is fact-based.

The Scenario: The "Compliance Assistant" ⚖️

Let's build a practical example where trust is paramount.

Agent's Job: Imagine a Legal/Compliance Agent.
The User: A Legal Officer.
The Request: "Find the latest updates to the California Consumer Privacy Act (CCPA) regarding data retention."
The Challenge: This data changes frequently and lives on government websites, not in Salesforce.
The Requirement: The answer must be accurate and derived from trusted public sources.

Step-by-Step Walkthrough

Prerequisites

  1. Access to Agentforce in your Salesforce Org.
  2. Permissions to edit Agents and Actions.

Note: Ensure your org has the Einstein and Agentforce features enabled.

Step 1: Instruction Engineering 🗣️

We need to give the Agent proper guardrails to ensure it uses web search responsibly. All of this happens at the Agentforce Topic level, where we will add the Search The Web action.

Below is a reference for what the Compliance Research topic configuration could look like, across three fields:

  • Topic Classification Description and Scope
  • Topic Instructions
  • Example User Input
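
As an illustrative sketch, here is the kind of wording each field could hold; these values are assumptions to adapt to your org, not Salesforce defaults:

Classification Description and Scope: Handles research requests about external laws, regulations, and compliance topics that are not stored in Salesforce.

Topic Instructions: Only use the Search The Web action for questions about public laws and regulations. Always summarize the findings and cite the source. If no reliable result is found, say so rather than guessing. Never search for confidential customer or deal information.

Example User Input: "Find the latest updates to the California Consumer Privacy Act (CCPA) regarding data retention."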

Step 2: Equip the Action 🛠️

For the topic we created, we grab the standard web search action from the asset library. This is a pre-built capability provided by Salesforce.

  1. Navigate to Setup > Agentforce > Agents.
  2. Open your desired Agent (e.g., "Legal Assistant").
  3. Go to the specific Topic like "Compliance Research".
  4. Click + Add Action and select Add from Asset Library.
  5. Search for "Search The Web".
  6. Click Add.

Step 3: Configure the Trusted Provider (OpenAI) 🔐

This is the critical configuration step. By default, the action might use a standard index, but specifying OpenAI allows for advanced retrieval reasoning.

  1. Click on the newly added Search The Web action to view its properties.
  2. Scroll down to the Configuration section.
  3. Locate the Search Provider section.
  4. For the instructions, you can append the words "using OpenAI" to the existing text.
  5. For the Configuration Value select "OpenAI". If it is grayed out, click the Edit Action button at the bottom.
  6. Save your changes.

Step 4: Testing the Trust Layer 🧪

Let's verify the flow in the Simulator.

User Prompt Input: "Find the latest updates to the California Consumer Privacy Act (CCPA) regarding data retention"

You should see a grounded response summarizing the latest CCPA changes, and the Agentforce Reasoning Engine’s log will show the Search The Web action being invoked with OpenAI as the search provider.

How It All Comes Together 🏗️

So, how did we actually solve the requirement without compromising security?

  1. The Mechanics (The Action): The webSearchStream action created a real-time bridge to the outside world. By switching the provider to OpenAI, we upgraded the search from a simple keyword lookup to a semantic query. This allowed the Agent to understand the nuance of "latest amendments" and retrieve highly relevant results from government sites rather than generic blog spam.

  2. The Protection (The Trust Layer): Instead of opening a direct pipe to the internet, every interaction now passes through the Einstein Trust Layer. This layer acts as a secure broker, providing PII data masking and auditing, and helping ensure the Agent does not accidentally summarize harmful content.

  3. The Solution: We successfully transformed an isolated CRM bot into a connected Compliance Assistant. The Agent could answer questions about external laws (CCPA) using internal reasoning, completely solving the "stale data" problem while maintaining enterprise-grade security.

Conclusion

By combining the Search The Web action with the OpenAI provider, you aren't just opening a door to the internet; you are building a secure, transparent window. You get the vast knowledge of the web, filtered through the security and grounding of the Einstein Trust Layer.

Audit your existing Agents. Are they hitting dead ends on questions about public knowledge? Enable trusted web search today to close that gap!

Happy (and Safe) Building! ☁️

Note: The Web Search Results Using OpenAI feature is in Beta as of January 2026.
Refer to the release notes related to this feature for more details.

The Great Tune-Out: Why AI’s Perfect Illusions Might Save Us from Social Media

2026-01-13 09:41:56

We are currently living through the "wow" phase of generative AI. Every day, a new model drops, producing images that are impossibly stylized, hyper-realistic, or deeply unsettling. Video generation is close behind, promising cinema-quality output from simple text prompts. It is a technological marvel.

It is also, quite possibly, the beginning of the end for social media as we know it.

For fifteen years, social media platforms have relied on a fragile currency: trust. Not implicit trust—we know people use filters and curate their lives—but a baseline assumption that the human on the other side of the screen actually went to that restaurant, saw that sunset, or held that opinion.

The deluge of AI-generated media is about to debase that currency into oblivion. We are rushing toward a future where the internet is awash in synthetic reality, and the unintended consequence might be a massive societal "tune-out" and a surprising renaissance of the offline world.

The National Enquirer Effect

To understand the future of an AI-saturated internet, look at the past of supermarket tabloids.

For decades, papers like the National Enquirer have printed headlines screaming about alien babies, two-headed politicians, and miraculous cures. The covers are designed to be visually arresting and instantly gripping. Yet, the vast majority of people walking through the checkout line ignore them.

Why? Because we have collectively categorized them as "entertainment," not reality. They are hogwash. Even if a headline is technically true, the source is so polluted with fabrication that the effort required to verify it isn't worth the return. We have developed a societal filter to tune out the noise.

AI is about to turn Instagram, TikTok, Facebook, and X into the National Enquirer.

When anyone with a smartphone can generate a photo of themselves at an exclusive party they never attended, or create a video of a politician saying something they never said, the "wow" factor quickly curdles into exhaustion.

Social media feeds thrive on engagement rooted in reaction: envy, outrage, inspiration, humor. But those reactions require a belief that the stimulus is real.

  • If you know that the incredible vacation photo you’re envying was prompted by an algorithm in a basement, the envy vanishes.
  • If the viral outrage video is revealed to be a deepfake, the anger turns into cynical apathy.

When everything is spectacular, nothing is impressive. And when nothing is verifiable, nothing is credible.

The Death of "Pics or It Didn't Happen"

For a century, photography was our primary anchor for objective reality. The phrase "pics or it didn't happen" was the internet's golden rule of evidence.

Generative AI is dissolving this anchor. We are entering an epistemological crisis—a crisis of knowledge—where our eyes can no longer be trusted to tell us the truth about the digital world.

The cognitive load of navigating this new internet will become unsustainable for the average person. We do not have the mental energy to fact-check every image, decode every pixel for artifacts, or run every video through deepfake detection software just to scroll through our feeds before bed.

When the cost of verifying reality becomes too high, humans default to skepticism. We will assume everything digital is fake until proven otherwise. And once that threshold is crossed, social media loses its primary utility as a window into other people's lives. It just becomes a window into a never-ending, hallucinated cartoon.

The Offline Renaissance

So, where do we go when the digital square becomes a cacophony of beautiful lies?

We go outside.

If the internet becomes a low-trust environment, the value of high-trust environments skyrockets. The only place where trust can currently be readily established is the physical world.

We may be on the verge of an "Offline Renaissance," driven not by Luddism, but by a desperate craving for authenticity. When you can no longer trust a digital recording of a concert, attending live music becomes a premium experience. When digital art is infinitely replicable by machines, physical crafts made by human hands gain immense value.

We will see a return to analog verification. The handshake deal, the eye contact across a table, the tangible reality of a crowded room—these things cannot be prompted into existence by Midjourney.

The "status symbols" of the future might not be flawless Instagram aesthetics, but verifiable messiness. The flex won't be the perfect digital picture of a meal; it will be the actual stain on your shirt from eating it with friends.

The Great Correction

Social media has spent the last decade pulling us deeper into our screens, leveraging algorithmic addiction cycles. It seemed unstoppable.

It is ironic that the very technology meant to turbocharge content creation—AI—might be the thing that breaks the addiction loop. By flooding the zone with synthetic perfection, AI exposes the emptiness of the infinite scroll.

The depopularization of social media won't happen overnight. It will be a slow fade as users realize they are shouting into a void filled with bots and viewing a world built of pixels and air. Like the National Enquirer at the checkout stand, the feeds will still be colorful, loud, and desperate for attention.

But we just won't be looking anymore. We’ll be too busy living in the real world, where things are messier, harder to capture, but undeniably true.

💬 Discussion

Do you feel your own trust in digital media eroding yet? Are you finding yourself placing more value on in-person interactions as AI content scales up?

How to Compare Two PDF Documents in Java: A Comprehensive Guide

2026-01-13 09:38:12

In the realm of software development, ensuring the integrity and consistency of documents is paramount. Frequently, developers encounter the need to programmatically compare two PDF documents to identify changes, track revisions, or validate content. This task, while seemingly complex, is crucial for various applications, from version control systems to automated quality assurance processes. This tutorial will demystify the process, guiding you through how to effectively compare PDF documents in Java using Spire.PDF for Java, a robust library designed for PDF manipulation. By the end of this guide, you'll be equipped to implement both full document and page-specific comparisons.

Getting Started with Spire.PDF for Java

Before diving into the comparison logic, let's understand why Spire.PDF for Java is a suitable choice and how to set it up in your Java project.

Why Spire.PDF for Java?

Spire.PDF for Java is a professional PDF component that allows developers to create, write, edit, convert, and print PDF documents in Java applications. It supports a wide range of features, including text extraction, image handling, form filling, and, critically for this tutorial, comprehensive document comparison capabilities. Its API is designed to be intuitive, enabling efficient integration into various Java projects for robust PDF processing.

Installation and Setup

To use Spire.PDF for Java, you need to add it as a dependency to your project. The simplest way to do this is via Maven or Gradle.

Maven Dependency:

<repositories>
    <repository>
        <id>com.e-iceblue</id>
        <name>e-iceblue</name>
        <url>https://repo.e-iceblue.com/nexus/content/groups/public/</url>
    </repository>
</repositories>
<dependencies>
    <dependency>
        <groupId>e-iceblue</groupId>
        <artifactId>spire.pdf</artifactId>
        <version>11.12.16</version>
    </dependency>
</dependencies>

If you are not using a build tool, you can download the JAR file directly from the official E-iceblue website and add it to your project's build path.

Comparing Entire PDF Documents

Comparing entire PDF documents involves identifying all discrepancies between two PDF files, including text, images, formatting, and layout changes across all pages. This is particularly useful for version control or auditing complete document revisions.

To perform a full document comparison using Spire.PDF, you typically load both PDF files, instantiate a PdfComparer object, and then execute the comparison method. The library then generates a new PDF document highlighting all the differences.

Here’s a step-by-step guide and a Java code example:

  1. Load PDF Documents: Create PdfDocument objects for both the original and modified PDF files.
  2. Initialize PdfComparer: Instantiate PdfComparer with the two PdfDocument objects.
  3. Set Page Ranges (Optional): By default the entire documents are compared; set page ranges only if you want to narrow the comparison (covered in the next section).
  4. Execute Comparison: Call the compare() method to perform the comparison and save the resulting difference document.

import com.spire.pdf.PdfDocument;
import com.spire.pdf.comparison.PdfComparer;

public class ComparePDFDocuments {
    public static void main(String[] args) {
        //Create an object of PdfDocument class and load a PDF document
        PdfDocument pdf1 = new PdfDocument();
        pdf1.loadFromFile("Sample1.pdf");

        //Create another object of PdfDocument class and load another PDF document
        PdfDocument pdf2 = new PdfDocument();
        pdf2.loadFromFile("Sample2.pdf");

        //Create an object of PdfComparer class
        PdfComparer comparer = new PdfComparer(pdf1, pdf2);

        //Compare the two PDF documents and save the comparison results to a new document
        comparer.compare("ComparisonResult.pdf");

        //Release the document resources
        pdf1.close();
        pdf2.close();
    }
}

The output ComparisonResult.pdf will visually indicate the differences between Sample1.pdf and Sample2.pdf. Typically, added content is highlighted in one color (e.g., green), deleted content in another (e.g., red), and modified content might show both. This visual representation makes it easy to quickly identify all changes.

Comparing Specific Pages within PDF Documents

There are scenarios where comparing entire documents is unnecessary or inefficient, especially with very large PDF files. For instance, you might only be interested in changes on a particular page, or a specific range of pages. Spire.PDF for Java facilitates this granular control by allowing you to compare only selected pages.

This approach is beneficial for focusing on specific sections of a document, such as an updated annex or a revised legal clause, without processing the entire file.

The process for comparing specific pages is similar to full document comparison, with a key difference in how you set the page ranges for the PdfComparer.

  1. Load PDF Documents: Same as before, load both PDF files into PdfDocument objects.
  2. Initialize PdfComparer: Instantiate PdfComparer with the two PdfDocument objects.
  3. Set Specific Page Ranges: Crucially, define the exact page numbers or ranges you wish to compare.
  4. Execute Comparison: Call the compare() method to generate the difference document for the specified pages.

import com.spire.pdf.PdfDocument;
import com.spire.pdf.comparison.PdfComparer;

public class ComparePDFPageRange {
    public static void main(String[] args) {
        //Create an object of PdfDocument class and load a PDF document
        PdfDocument pdf1 = new PdfDocument();
        pdf1.loadFromFile("G:/Documents/Sample6.pdf");

        //Create another object of PdfDocument class and load another PDF document
        PdfDocument pdf2 = new PdfDocument();
        pdf2.loadFromFile("G:/Documents/Sample7.pdf");

        //Create an object of PdfComparer class
        PdfComparer comparer = new PdfComparer(pdf1,pdf2);

        //Set the page range to be compared: page 1 of the first document against page 1 of the second
        comparer.getOptions().setPageRanges(1, 1, 1, 1);

        //Compare the selected pages and save the results to a new document
        comparer.compare("ComparisonResult.pdf");

        //Release the document resources
        pdf1.close();
        pdf2.close();
    }
}

Comparing specific pages is a more targeted approach. While full document comparison provides a holistic view of changes, page-specific comparison offers efficiency and focus when only certain sections are relevant. This can significantly reduce processing time and resource consumption for very large documents, making it an invaluable tool for targeted document review and validation workflows.

Conclusion

This tutorial has demonstrated how to effectively compare two PDF documents in Java using the Spire.PDF for Java library. We've covered the setup process, followed by detailed examples for both comparing entire PDF documents and focusing on specific pages. By leveraging Spire.PDF, developers can easily integrate robust document comparison functionalities into their Java applications, enabling automated change detection and content validation. These techniques are fundamental for maintaining document integrity, facilitating version control, and streamlining various document processing workflows, offering significant value in diverse programming contexts.

Scalable Architecture Patterns Aren’t Magic — They Just Fix Constraints

2026-01-13 09:21:28

A lot of “scalable architecture” advice sounds like a checklist:

Caching. Queues. Replicas. Sharding. Event-driven. Circuit breakers.

The list isn’t wrong — but the mindset often is.

Architecture patterns don’t create scalability.

They remove constraints that prevent systems from scaling.

Once you look at patterns through that lens, they become far easier (and safer) to use.

Why systems actually stop scaling

Most production systems don’t fail because they’re “not modern enough”.
They fail because a specific constraint becomes dominant:

  • A single node hits CPU, memory, or connection limits
  • A database collapses under read load
  • Latency explodes due to slow downstream dependencies
  • Retry storms amplify partial failures
  • One workload starves all others of shared resources

Patterns exist to address these exact problems — nothing more.

Patterns are tools, not upgrades

Each common scalability pattern targets a specific constraint:

  • Stateless services + horizontal scaling: remove single-node capacity limits.
  • Caching and read replicas: relieve read-heavy databases.
  • Async processing with queues: take long-running work off the hot request path.
  • Backpressure and rate limiting: prevent systems from overloading themselves.
  • Circuit breakers and bulkheads: limit blast radius when dependencies slow down or fail (see the sketch below).
  • Sharding: unlock write scalability once a single database node becomes the bottleneck.

Used correctly, these patterns buy headroom.

Used blindly, they mostly add complexity.
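
As one concrete illustration of "limiting blast radius", here is a minimal circuit-breaker sketch in Node.js; the thresholds and names are illustrative assumptions, not any particular library's API:

// Fail fast after repeated failures instead of piling requests onto a
// struggling dependency; allow a trial call once `resetMs` has elapsed.
function circuitBreaker(fn, { maxFailures = 5, resetMs = 30000 } = {}) {
  let failures = 0;
  let openedAt = 0;

  return async (...args) => {
    if (failures >= maxFailures && Date.now() - openedAt < resetMs) {
      throw new Error('circuit open: failing fast');
    }
    try {
      const result = await fn(...args);
      failures = 0; // a success closes the breaker
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= maxFailures) openedAt = Date.now();
      throw err;
    }
  };
}

// Usage: wrap a flaky downstream call once and reuse the wrapper
// const safeFetch = circuitBreaker(callDownstream);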

When patterns backfire

Many scalability incidents aren’t caused by missing patterns, but by misapplied ones:

  • Caching without handling cache stampedes
  • Async queues without idempotency
  • Sharding before understanding access patterns (hello, hot shards)
  • Over-aggressive circuit breakers that reduce capacity during normal traffic

Patterns don’t remove constraints — they move them.
If you’re not measuring, you won’t notice where they reappear.
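
To make the cache-stampede point concrete, here is a minimal request-coalescing sketch in Node.js; cache and loadFromDb are hypothetical stand-ins for your own cache client and database loader:

// Deduplicate concurrent cache misses so only one database load runs per key.
const inFlight = new Map(); // key -> Promise of the value being loaded

async function cachedGet(cache, key, loadFromDb, ttlSeconds = 60) {
  const hit = await cache.get(key);
  if (hit != null) return JSON.parse(hit);

  // Without this guard, N concurrent misses trigger N database loads:
  // the cache stampede mentioned above.
  if (!inFlight.has(key)) {
    inFlight.set(key, (async () => {
      try {
        const value = await loadFromDb(key);
        await cache.set(key, JSON.stringify(value), ttlSeconds);
        return value;
      } finally {
        inFlight.delete(key); // allow future refreshes
      }
    })());
  }
  return inFlight.get(key);
}

Note how the constraint moves rather than disappears: the database is protected, but the coalescing map lives in one process, so a large fleet of servers can still stampede together.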

How scalable systems actually evolve

In real systems, scalability emerges from a simple loop:

  1. Measure real constraints (latency tails, saturation, contention).
  2. Identify where time or capacity accumulates.
  3. Apply the smallest pattern that removes that constraint.
  4. Validate under representative load.
  5. Repeat when the next bottleneck shows up.

Scalability isn’t an architecture choice.

It’s an ongoing constraint-management process.

Final thought

If you remember only one thing:

Patterns don’t make systems scale.

Fixing constraints does.

Treat architecture patterns like surgical tools, not decorations — and your system will scale when it actually needs to.

Want real-world examples?

If you prefer concrete before/after work over theory, there are a number of case studies covering performance bottleneck isolation, production scalability improvements, and measurable outcomes under real load here:

https://optyxstack.com/case-studies

SmoothUI: 40+ Animated React Components with Motion

2026-01-13 09:19:42

SmoothUI: a React component library with Motion-powered animations.

The library includes 40+ components like animated accordions, dropdowns, modals, and text effects.

You install components individually through shadcn CLI rather than importing a full package.

Each component ships with TypeScript definitions, dark mode support, and Tailwind-based customization.

Works directly with Next.js and standard React projects.

👉 Blog Post

👉 GitHub Repo

👉 Browse All Components

NodeJS &amp; MongoDB API revisited — Following the MVC Pattern

2026-01-13 09:17:33

The MVC pattern (Model, View, Controller) is popular in backend programming. It was first introduced to me at the Flatiron School when I was learning Ruby/Sinatra/Ruby on Rails. To break down this pattern you have…

Model — The object you are creating with its attributes. An example would be a word model. A word is the “thing” and it has attributes like definition, synonyms, origin, etc…

View — This is what the user sees; it displays the information.

Controller — The logic of the backend. It decides what to do for each action for each HTTP verb. The DELETE controller action will delete the instance of the object. CREATE will make one. GET with an id will find and show a specific instance. GET all will show all instances. UPDATE (PATCH or PUT) will update the desired instance attributes. The controller decides how this all works.

Here I am going to explain how to set up an MVC-style NodeJS/MongoDB API. Hopefully this will also show you why this format is better for readability and organization of code.

File Formatting — routes.js
First you’re only going to have a single routes.js file rather than a route file for each model. Mine isn’t finished yet, but it looks something like the sketch below:
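
A minimal reconstruction of such a routes.js (the paths and controller functions are assumptions for illustration):

// routes.js: one file routing every HTTP verb to a controller function
const express = require('express');
const router = express.Router();

// Path is an assumption; point this at your own controllers directory
const UsersController = require('./controllers/UsersController');

router.get('/users', UsersController.index);           // show all users
router.get('/users/:id', UsersController.show);        // show one user
router.post('/users', UsersController.store);          // create a user
router.put('/users/:id', UsersController.update);      // update a user
router.delete('/users/:id', UsersController.destroy);  // delete a user

module.exports = router;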

You can see that for each HTTP verb action you have a call to the UsersController, which has a corresponding function to be called. Ex: The POST route (router.post) calls the store function in the UsersController to create the user. This then leads us to the controller…

File Formatting — Controllers Directory
In your server directory you’re going to create a “controllers” directory (projectdir/server/controllers). Then you should make a controller for each model, so we have a UsersController for the User model.

Here is a snippet of my UsersController, sketched below. You’re going to create one of these async functions for each action. This specific “index” function will pull all the users from the database and display their corresponding comments. If the controller has any trouble, then the error message will be displayed with the error status code 500 (which means there is an internal server error, which would be the creator’s fault, not the user’s).
You’re going to make each of these functions for each action and display the data you want to show.
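
A sketch of what that controller could look like; the populate('comments') call assumes the User schema references comments:

// controllers/UsersController.js
const User = require('../models/User');

// index: pull all users and display their corresponding comments
exports.index = async (req, res) => {
  try {
    const users = await User.find().populate('comments');
    res.status(200).json(users);
  } catch (err) {
    // 500: internal server error (the creator's fault, not the user's)
    res.status(500).json({ error: err.message });
  }
};

// store, show, update, and destroy follow the same async/try-catch shape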

File Formatting — Models Directory
Just like before, you’re going to have a directory for models (projectdir/server/models). Each model file will have the schema and will be the same as I showed in my last post. If you need a refresher, it will look something like the sketch below:
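
A minimal sketch of such a model file (the field names are illustrative assumptions):

// models/User.js
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  name: { type: String, required: true },
  email: { type: String, required: true, unique: true },
  // Referenced documents that the controller's populate('comments') resolves
  comments: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Comment' }],
});

module.exports = mongoose.model('User', userSchema);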

That’s it!
I hope this quick guide helped! These small changes organize your code in a better way and help readability! I can’t leave a link to this code repo because it is private, but I will leave a link to the repo that helped me understand these concepts here. I’d highly recommend checking it out! Next I’ll address model relationships for MongoDB using the same API structure, so stay tuned! Hope this helped!

Happy Coding! :D