2026-01-14 18:00:04
Remote work managers are extremely stressed in 2026… but why? As the keepers of productivity, they need to know the team is working without making people feel watched. Monitoring software (screenshots every ten minutes, activity percentages, app usage logs) has been the answer in the past. What happened? Employees felt surveilled, trust eroded quietly, and workplace dynamics started to feel like parole, not a partnership.
But what if all that tracking data was ineffective?
Time tracking has evolved.
First, the punch clock. Physical, simple, binary. You were either at work or you weren't. The question it answered: how many hours did you work?
The second phase arrived with remote work and digital tools. Screenshots captured screens at random intervals. Software logged which applications were open, which websites were visited, how much the mouse moved. The question expanded: what were people doing during those hours?
This phase solved visibility. It also created backlash. Employees reported feeling anxious, distracted by the awareness of being watched. Some companies found that monitoring increased activity metrics, while actual output stayed flat—people learned to perform busyness rather than produce work.
Now, instead of collecting more data, technology can interpret the data that already exists and ask: what does this pattern actually mean?
Traditional monitoring lets a manager scroll and spot obvious issues, like someone who has been inactive for hours or is idling on social media. But is this team member burning out? Is this project taking longer than it should? Is someone disengaged, or just working differently?
The data is the key. Answering manually means cross-referencing hours, comparing across weeks, noticing patterns that only emerge over time. Most managers don't have that time. So the insights stay buried.
AI changes things with pattern recognition across thousands of data points. It flags when someone has worked 50+ hours for three consecutive weeks, when a project is taking twice as long as similar past projects, or when a team member's activity patterns have shifted in ways consistent with disengagement.
Instead of watching people, the system watches patterns and surfaces what counts.
WebWork Time Tracker launched its AI features in January 2025.
The product was born in 2016 with a small team and a basic MVP, time tracking with screenshots. No outside funding. Growth came from the product working well enough that customers spread the word.
The platform expanded steadily. App and website monitoring. Project management. Team chat. Attendance tracking. Payroll processing. Timesheets and approvals. Integrations with tools like Deel, Stripe, PayPal, and Payoneer. Shift scheduling. PTO management.
By the time WebWork Time Tracker Inc. was incorporated in 2022, monthly recurring revenue had reached $25,000, entirely bootstrapped.
By the time AI launched, WebWork had spent eight years building everything around it. The platform serves over 26,000 businesses. The AI has real data to analyze, complete workflows from time tracking through payroll, across thousands of companies and millions of hours.
Vahagn Sargsyan, WebWork's founder and CEO, says, "We built what businesses actually needed, tested it with real users, and kept improving. The AI has something real to analyze."
WebWork's AI features focus on interpretation. The AI layer asks what the data means.
Burnout detection identifies patterns suggesting overwork, not just long hours in a single week, but sustained patterns over time that correlate with declining performance or eventual turnover.
Workload imbalance alerts flag when certain team members consistently carry heavier loads than others, often invisible in day-to-day management but obvious in aggregate data.
Attendance pattern analysis spots irregularities that might indicate disengagement, such as shifts in login times and changes in activity rhythms.
The system generates summaries and suggestions without requiring managers to dig through dashboards.
Sargsyan continues, "We built monitoring features because clients asked for them. But what they actually needed was understanding. Not more screenshots, but answers to questions like 'why is this project behind?' or 'who on this team is at risk of burning out?' The data was always there. We just weren't interpreting it."
The platform offers screenshots that can be enabled, disabled, or blurred for privacy. App and website tracking can be turned on or off. This flexibility sidesteps the debate over whether monitoring is acceptable. Instead, it puts the decision with each company.

In early 2025, Sargsyan published "Builder's Time: The Blueprint for Creators, Leaders, and Teams to Master Time." The book explores how individuals and organizations misperceive time, why productivity often means motion without progress, and how to design systems that protect meaningful work.
The book argues that time is something you design rather than something you manage, a resource that flows through people, teams, and products in patterns that can be improved.
"Builder's Time came from watching how teams actually use time data and how often they misuse it," Sargsyan explains. "WebWork is the system that supports it."
"Five years from now, I don't think managers will review screenshots," Sargsyan predicts. "They'll ask their system questions: 'Is my team healthy? Where are we losing time? Who needs support?'"
Whether that shift reduces the tension between visibility and trust remains uncertain. Employees may feel differently about AI analyzing their patterns than humans reviewing their screenshots, or they may not. The surveillance concern doesn't disappear just because the surveillance gets smarter.
What does change is the value proposition for employers. Monitoring that produces insights might justify itself differently than monitoring that produces compliance. If AI can identify burnout before it causes turnover, or inefficiency before it delays projects, the ROI calculation shifts from "catching problems" to "preventing them."
WebWork's bet is that eight years of building the foundation and 26,000 businesses worth of data positions it to deliver on that promise.
2026-01-14 17:30:04
As global commerce becomes increasingly decentralized, the ability to communicate clearly across borders and languages has become a prerequisite. But cross-border communication often suffers from language friction, slow replies, unnatural phrasing, and inappropriate tone, all of which can make teams look unprofessional and reduce trust. TranGPT solves exactly this problem.
TranGPT exists to remove language friction and make multilingual communication feel natural, fast, and professional. This social translation app is designed to help professionals communicate effectively and efficiently across language barriers.
It’s fair to say that TranGPT lies at the intersection of AI customer service, social translation, and sales enablement software. But it’s not an automated sales tool. It’s a capability amplifier that supports human professionals by improving how they communicate. This ability is extremely useful in cross-border sales, where trust, speed, and responsiveness directly influence conversion outcomes.
TranGPT is not meant to replace human decision-making; it’s designed to improve it through AI-assisted responses aligned with business intent and brand tone.
AI multilingual customer service system
This feature enables real-time AI-assisted responses across languages, helping teams reply to customer inquiries quickly with accuracy and professionalism. Responses preserve intent and tone rather than offering mechanical, unnatural translations.
Social platform translation and reply assistance
In private messaging environments, such as social media DMs or chat-based sales channels, TranGPT translates incoming messages and gives optimized reply suggestions. This enables sales teams to communicate naturally with international prospects without sacrificing speed or confidence.
Sales script optimization and decision-support tools
TranGPT analyzes communication context and provides suggestions to improve clarity, persuasion, and consistency. Instead of offering rigid scripts, it serves as a support layer that adapts to different stages of the conversation, from initial contact through negotiation to follow-up.
Global coverage at scale
TranGPT supports language translation across more than 180 countries and regions. Currently, the platform supports major global languages, including English, Chinese, Japanese, French, German, and Spanish, as well as many others used in cross-border trade.
Speed matters in sales and customer service. TranGPT delivers translation at near-instant speed using advanced AI models. Messages appear almost immediately, keeping conversations fluid instead of delayed. This helps teams respond while interest remains high.
Accuracy matters even more. TranGPT does not rely on rigid literal translation. It analyzes sentence structure, intent, and tone. The output reads as natural and professional, not mechanical. Business emails, customer inquiries, social messages, and documents all benefit from this approach.
TranGPT is not just about translating text. It provides recommendations based on the conversation context. Incoming messages prompt the AI to generate relevant responses that match the topic, tone, and phrasing of the conversation. This feature is especially useful in customer service and sales chats. Rather than starting replies from scratch or massaging clunky translations, teams get polished responses that sound human and confident.
TranGPT also allows tone modification and optimization. Responses can sound polite, friendly, professional, or firm depending on the situation. This flexibility helps maintain brand consistency across different markets and platforms.
TranGPT includes an automatic memory update system. The AI adapts to usage patterns, preferred wording, and communication style over time. As usage increases, translations and responses become more aligned with internal standards and habits. This personalization improves fluency and reduces repetitive corrections. Teams spend less time editing AI output and more time engaging with customers. The system improves quietly in the background, without requiring complex configuration.
Terminology management further strengthens this capability. TranGPT extracts and manages key terms from translations, ensuring consistent use of product names, industry phrases, and brand language across conversations.
Many cross-border interactions now happen in private messages rather than email. TranGPT supports social platform translation and reply assistance, making it easier to manage conversations across channels.
Messages, replies, and ongoing chats remain organized and readable. AI assistance helps manage accounts, incoming messages, and follower interactions more efficiently. This turns multilingual social communication into a manageable workflow instead of a daily bottleneck. Fast processing allows multiple conversations to move forward at once. This reduces response backlog and improves engagement quality.
TranGPT includes grammar checking and correction for English and other supported languages. The system detects errors and offers improved phrasing suggestions. This function works especially well for non-native speakers who understand the message but struggle with structure or tone. It ensures communication remains polished without slowing down response time.
In addition, TranGPT features a built-in bilingual dictionary that supports language understanding and clarity, helping words and phrases map precisely between languages.
TranGPT boasts a simple, clean, and intuitive interface. Its core functions are easy to access without training overhead. The AI translator is scalable, which makes it fit for growing teams and increasing message volume. It can handle business or team expansion without sacrificing speed or quality.
In addition, independent deployment options and complete data isolation ensure privacy and control. Multiple encryption layers protect sensitive business communication. Plus, TranGPT supports multiple output formats, including text, PDF, and images, adding flexibility for different business needs.
Official website: https://www.trangpt.ai
X (formerly Twitter): https://x.com/TranGPT_AI
Telegram: https://t.me/TranGPT_HK
2026-01-14 15:10:58
How are you, hacker?
🪐 Want to know what's trending right now?
The Techbeat by HackerNoon has got you covered with fresh content from our trending stories of the day! Set email preference here.
## Back to Basics: Database Design as Storytelling
By @dataops [ 3 Min read ]
Why great database design is really storytelling—and why ignoring relational fundamentals leads to poor performance AI can’t fix. Read More.
By @zbruceli [ 18 Min read ] A deep dive into the Internet Archive's custom tech stack. Read More.
By @proofofusefulness [ 8 Min read ] Proof of Usefulness is a global hackathon powered by HackerNoon that rewards one thing and one thing only: usefulness. Win from $150k! Read More.
By @drechimyn [ 7 Min read ] Broken Object Level Authorization (BOLA) is eating the API economy from the inside out. Read More.
By @tigranbs [ 9 Min read ] A deep dive into my production workflow for AI-assisted development, separating task planning from implementation for maximum focus and quality. Read More.
By @akiradoko [ 20 Min read ] A roundup of 10 standout C and C++ bugs found in open-source projects in 2025. Read More.
By @proflead [ 4 Min read ] Ollama is an open-source platform for running and managing large-language-model (LLM) packages entirely on your local machine. Read More.
By @kilocode [ 6 Min read ] CodeRabbit alternative for 2026: Kilo's Code Reviews combines AI code review with coding agents, deploy tools, and 500+ models in one unified platform. Read More.
By @mohansankaran [ 10 Min read ] Jetpack Compose memory leaks are usually reference leaks. Learn the top leak patterns, why they happen, and how to fix them. Read More.
By @zbruceli [ 21 Min read ] Groq’s Deterministic Architecture is Rewriting the Physics of AI Inference. How Nvidia Learned to Stop Worrying and Acquired Groq Read More.
By @dmtrmrv [ 10 Min read ] Start with markup, not styles. Write only the CSS you actually need. Design for mobile first, not as a fix later. Let layouts adapt before reaching for breakpoints. Read More.
By @superorange0707 [ 7 Min read ] Learn prompt reverse engineering: analyse wrong LLM outputs, identify missing constraints, patch prompts systematically, and iterate like a pro. Read More.
By @erelcohen [ 4 Min read ] Accuracy is no longer the gold standard for AI agents—specificity is. Read More.
By @jonstojanjournalist [ 3 Min read ] Ensure your emails are seen with deliverability testing. Optimize campaigns, boost engagement, and protect sender reputation effectively. Read More.
By @companyoftheweek [ 4 Min read ] Meet ScyllaDB, the high-performance NoSQL database delivering predictable millisecond latencies for Discord and hundreds more. Read More.
By @normbond [ 3 Min read ] When teams move fast without shared meaning, quality dissolves quietly. Why slop is a symptom of interpretation lag, not a technology failure. Read More.
By @companyoftheweek [ 4 Min read ] Ola.cv is the official registry for the .CV domain, helping individuals to build next-gen professional links and profiles to enhance their digital presence. Read More.
By @djcampbell [ 6 Min read ] Is AI good or bad? We must decide. Read More.
By @manoja [ 4 Min read ] A senior engineer explains how AI tools changed document writing, code review, and system understanding, without replacing judgment or accountability. Read More.
By @scylladb [ 6 Min read ] ScyllaDB offers a high-performance NoSQL alternative to DynamoDB, solving throttling, latency, and size limits for scalable workloads. Read More.
🧑‍💻 What happened in your world this week? It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️
ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME
We hope you enjoy this week's worth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it.
See you on Planet Internet! With love,
The HackerNoon Team ✌️
2026-01-14 13:53:57
Crypto presales are no longer judged the way they were a few years ago. The change did not come from a single market crash or regulatory decision. It came from experience. Investors, builders, and observers have simply seen enough outcomes to recognize patterns that were invisible before. What once passed as ambition is now often read as overreach, and what used to look slow can now signal discipline.
In earlier cycles, the size and speed of a raise were treated as validation. Large numbers created confidence, even when there was little clarity around delivery. That logic has weakened. Today, presales are increasingly evaluated on whether the structure of the raise makes sense in relation to what the team is actually capable of building. Smaller, phased funding is no longer a weakness. In many cases, it is interpreted as restraint. Some crypto presale projects, such as Hexydog, reflect this shift by framing their presale around defined utility and measured scope rather than relying on aggressive fundraising narratives.
Another quiet change is how utility is interpreted. It is no longer enough to promise future use cases. The question has shifted to whether a token has a clear role that can exist independently of price movement. Projects that can explain where a token fits once speculation fades tend to retain attention longer, even if adoption is slow. Utility has become less about excitement and more about coherence.
Looking back, some early projects unintentionally established standards that are now used to evaluate new presales. Their success was not just timing. It was alignment between funding, structure, and execution.
Ethereum raised funds in 2014 with a clear technical vision and a limited scope relative to its ambition. The presale did not promise immediate dominance. It focused on building a base layer and allowed the ecosystem to grow organically around it.
Chainlink conducted its ICO in 2017 with a narrowly defined purpose: decentralized oracles. The token had a specific function tied directly to network usage, which made progress measurable long after the ICO phase ended.
Filecoin raised significant capital in 2017, but paired it with explicit technical milestones and delayed distribution mechanics. The structure was complex, but it reflected the complexity of what was being built rather than marketing pressure.
Experience also comes from failure. Many projects followed the opposite path: aggressive fundraising, vague utility, and timelines disconnected from reality. These outcomes now shape investor skepticism.
BitConnect attracted massive attention and capital, but its presale and token model were built on unsustainable incentives rather than real functionality. Once confidence cracked, there was nothing underneath to support the system.
Centra Tech raised millions by marketing partnerships and utility claims that later proved false. The presale narrative collapsed because it was not backed by real infrastructure or execution.
Substratum promised decentralized web access and raised substantial funds, but struggled to deliver a working product. Over time, the gap between funding and progress became impossible to ignore.
The shift in how crypto presales are evaluated is not ideological. It is practical. The market has learned to separate narrative from structure and ambition from feasibility. Presales that acknowledge limits, define utility clearly, and align funding with execution are no longer seen as conservative. They are seen as realistic.
This change does not eliminate risk, but it changes where credibility comes from. In the current environment, trust is built less by how loudly a project launches and more by how quietly it continues to work once the presale ends.
2026-01-14 13:53:52
About a month ago, Elon Musk boldly stated that “work will be optional” while discussing the future impact of AI on the workforce. While I assumed this was to evoke emotion in the general public, I felt stirred to explore this further.
As adults, many of us are focused on just fulfilling basic needs of food, shelter, and safety. Psychologist Abraham Maslow’s hierarchy of needs states that we reach our full potential when we go beyond basic needs and reach the fifth level of self-actualization, in which we achieve our full potential, working and creating for our own purposes. This concept is crucial in relation to how I feel about work. I believe work is core to who we are as human beings. Work is not a burden or a curse, but what gives life purpose and drives people to greatness. And an essential element of work is creativity—once our creativity is unleashed, we feel more fulfilled. As humans, we’re meant to create—new recipes, buildings great and small, video games, furniture, space rockets, and paper dolls. We’ve even experimented as children, mashing up food and doodling in our notebooks.
In the Book of Genesis, humankind is described as being created in God’s image, reflecting our innate desire to work and create. Its importance could not be emphasized more—through the very fact that the Bible and Torah begin with “In the beginning, God created…” (emphasis mine). From a nonreligious perspective, Aristotle described people as makers and often used the term techne to characterize their goal-oriented approach to work. To work and create is in our nature.
Our firm, SparkLabs Group, is an investor in OpenAI, Anthropic, xAI, and more, so we’re no strangers to AI. We hear the voices of concern, tempered prognostications, and bold statements by people such as Musk. Fearmongering attracts the media outlets, so the more common voices of doomsayers are often repeated: AI will eliminate 50% of white-collar jobs within five years. AI will disrupt and cause half of the Fortune 500 companies to disappear. AI will result in the gradual disempowerment of humankind.
I believe that AI’s impact and future pathways are overstated because human nature is ignored in such statements. Our actions as humans are not simply driven by cause and effect, but by deep psychological needs, such as our need for a purpose. As Dostoevsky wrote in Notes from a Dead House (his fictionalized account of his years in a Siberian labor camp), the worst possible punishment would be to make people do utterly useless work. This was proven in Nazi concentration camps, as told by survivors such as Eugene Heimler. In A Link in the Chain, Heimler described how, after a factory that made equipment for the German military was bombed, the camp’s commander made prisoners move rubble from one end of the camp to the other. As this exercise continued for days and weeks, people began to commit suicide or were driven to madness. Why? For many reasons, I’m sure—one of them being that their psychological needs, their desire for purpose as human beings, were ignored. At least when they worked at a factory, even if they were morally opposed to or repulsed by the circumstances of their forced labor, such work served a purpose. Without a purpose, people were driven to insanity.

Innovation spurs more work and creativity. From the printing press to electric power, the internal combustion engine, integrated circuit, computers, and the Internet, jobs were not lost—they were created. New industries were formed, serving as stepping stones to further innovations. In truth, while we may experience short-term job loss in certain sectors, AI is itself a stepping stone to the next wave of incredible innovations that will create new jobs and expand our boundaries of knowing the unknown.
If anything, work will not become optional, but an option for more people. Yes, robotics and AI will replace many physical, mundane tasks, but they will allow the hidden Einsteins, Zuckerbergs, and Musks to flourish, to spend more time creating and realizing the unlimited potential in their minds rather than being trapped in the slog of basic economic needs. And it won’t just be some people who benefit—almost everyone will be elevated to new levels of intelligence and creativity.
During these short thirteen years as a venture capitalist, one thing I have learned is that the beauty of entrepreneurship reveals no end to human creativity and innovation. Every year, tens of thousands of tech startups and hundreds of thousands of small businesses are launched across the globe. I sometimes wonder how many more fashion startups can possibly come out of South Korea, or how many more new enterprises can come out of Silicon Valley after decades of iterations, and then what happens? Another one launches. Human nature prevails.
I believe AI will become a tool of innovation, one that enhances and accelerates tomorrow’s entrepreneurs. Twenty years from now, the next wave of major innovations won’t be focused on AI, but on something else just as awe-inspiring that gets us talking and, more importantly, gets us creating.
*This article was written without the assistance of any AI tool or platform.
2026-01-14 13:36:36
Go developers regularly deal with warnings from the built-in static analyzer. But what if you need more features or want to find something specific in your project? Go provides powerful tools for parsing and analyzing code. In this article, we'll talk about them and even create our first diagnostic rule.
We gained experience with Go's code-parsing tools when we started developing our own static analyzer in the Go language.
This article will be useful for beginners in development. So, let's start with the basics.
Static analysis is an analysis methodology that doesn't require code execution. Static analyzers can help detect errors early in development, and the list of things they can detect is practically endless: incorrect comparisons, unreachable code, potential panics, memory leaks, and much more.
The Go ecosystem has a built-in analyzer, go vet, and a collection of linters bundled in golangci-lint. These tools work quickly, connect easily to CI, and are familiar to every Go developer. So, unless you're building an analyzer as its own product with its own ecosystem (as we are), it's usually better to integrate these existing analyzers and their diagnostic rules into your workflow.
If you think the capabilities of existing linters are insufficient (you need to check for project-specific errors, monitor internal contracts, or you're simply confident that you can create a better diagnostic rule than an existing one), you can write your own analyzer. The standard library provides access to everything you need: syntax trees, type information, and file locations. On top of that, there is a convenient go/analysis framework for creating full-featured linters without the routine work.
In this article, we'll take a step-by-step look at how analysis tools work in Go, which packages are used to work with code and semantics, how to run your own analyzer, and how to integrate it into the overall linter system.
Go provides a complete set of tools for parsing and analyzing source code. These packages are included in the standard library or located in the official golang.org/x/tools repository. Let's go through all the ones we need.
The go/ast package describes the structure of an abstract syntax tree (AST). It represents the structure of the program: expressions, operators, and declarations. It doesn't contain information about variable types or about the object an identifier refers to—it only captures the syntactic form. This is one of the main packages you need to work with, since analyzers operate on AST nodes in one way or another.
The AST shows how the code is represented, but doesn't indicate what an identifier means. The go/types package does this. It enables you to determine the type of an expression, resolve an ast.Ident to the object it refers to, and check whether operations are used correctly.
The main result of its work is the types.Info structure. It links AST nodes to types and objects. This package is needed when the information from the tree alone is insufficient to understand an error.
The go/token package helps process information about positions in a file: lines, columns, and shifts.
golang.org/x/tools/go/analysis is the official framework for creating diagnostic rules. It abstracts routine tasks, such as parsing, type checking, sharing data between analyzers, and report generation.
The analysis.Analyzer structure is a key element. It specifies, among other things, the analyzer's name and documentation, its dependencies, and the Run function that performs the actual checks.

The framework also provides supplementary packages:

- analysis/singlechecker for running a single analyzer;
- analysis/multichecker for running multiple analyzers;
- analysis/passes/... for ready-made passes, for example, type checking and detection of printf-like functions.

The go/parser package turns Go source code into an ast.File tree. It's needed when you create an analyzer manually. If you use the analysis framework, it handles parsing itself.
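As a small, self-contained illustration of go/parser (the helper name listDecls and the file name demo.go are our own inventions for this sketch), parsing a source string and walking its top-level declarations looks like this:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// listDecls parses src and returns a short description of each
// top-level declaration, together with its source position.
func listDecls(src string) ([]string, error) {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		return nil, err
	}
	var out []string
	for _, decl := range file.Decls {
		switch d := decl.(type) {
		case *ast.FuncDecl:
			out = append(out, fmt.Sprintf("func %s at %s", d.Name.Name, fset.Position(d.Pos())))
		case *ast.GenDecl:
			// GenDecl covers var, const, type, and import declarations.
			out = append(out, fmt.Sprintf("%s declaration at %s", d.Tok, fset.Position(d.Pos())))
		}
	}
	return out, nil
}

func main() {
	src := "package demo\n\nfunc Hello() string { return \"hi\" }\n\nvar Version = \"1.0\"\n"
	decls, err := listDecls(src)
	if err != nil {
		panic(err)
	}
	for _, d := range decls {
		fmt.Println(d) // e.g. "func Hello at demo.go:3:1"
	}
}
```

Note that the parser never complains about undefined names or wrong types; that is the type checker's job, which is exactly why go/ast and go/types are usually used together.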
The go/packages package is used to load packages, ASTs, and type data. You configure how and which packages to load: modules, build tags, and the environment. It can load an entire set of packages with dependencies at once. The analysis framework already uses package-loading mechanisms internally, so go/packages isn't usually required when writing diagnostic rules. But if you're writing your own standalone tool, go/packages is almost always more convenient than manually combining a parser and the type checker.
We won't use it in this article, but it is important to mention it. SSA (Static Single Assignment) is an intermediate representation similar to low-level pseudocode. SSA allows you to solve data-flow analysis problems, perform interprocedural checks, and build more complex diagnostic rules (for example, checking for possible nil dereferencing).
A typical analyzer uses the following combination:

- the AST (go/ast) locates the required tree fragment;
- type information (go/types) precisely defines what kind of variable or function it is;
- the analysis framework combines everything into one diagnostic rule and integrates it into linters.

Next, we'll take a closer look at how an analyzer actually works and how these packages interact with each other.
The go/analysis framework organizes diagnostic rule execution so that you don't have to manually load packages, parse files, build type information, or handle many other things. It does this automatically and passes a pre-prepared set of data into your diagnostic rule: AST, types, file, and location.
When an analyzer runs via singlechecker, multichecker, or, for example, as part of golangci-lint, the framework first loads the packages, parses their files into ASTs, and builds type information with go/types.

The output forms a Pass structure, an object containing everything the diagnostic rule needs. We'll examine it in more detail in the section on the analysis framework.
Each analyzer describes its dependencies through the Requires field. For example, it can depend on nilness. The framework evaluates the correct execution order, runs dependencies, and passes their results to your diagnostic rule via pass.ResultOf.
The core of checks is located in Run(pass *analysis.Pass) (any, error).
Typically, the scenario for how a diagnostic rule works within a method looks like this:
1. Get a list of files:
```go
for _, file := range pass.Files {
    // traverse the AST and search for the desired constructs
}
```
2. Use pass.TypesInfo to specify the type of variables, functions, and expressions.
3. If a problem is found, you need to call:
```go
pass.Reportf(expr.Pos(), "message")
```
The framework will collect the results, output them in the command line, or pass them on if the analyzer is running within other tools.
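To make the scenario concrete without pulling in the framework, here is a stdlib-only sketch that emulates what a Run function does: walk the AST and collect "reports" the way pass.Reportf would. The diagnostic itself (flagging x == x self-comparisons) and the helper name findSelfComparisons are our own illustration, not code from go/analysis:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// findSelfComparisons parses src and reports every == or != comparison
// of an identifier with itself (x == x), a classic copy-paste bug.
// The collected strings play the role that pass.Reportf plays in a
// real analyzer.
func findSelfComparisons(src string) ([]string, error) {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		return nil, err
	}
	var reports []string
	ast.Inspect(file, func(n ast.Node) bool {
		bin, ok := n.(*ast.BinaryExpr)
		if !ok || (bin.Op != token.EQL && bin.Op != token.NEQ) {
			return true
		}
		lhs, lok := bin.X.(*ast.Ident)
		rhs, rok := bin.Y.(*ast.Ident)
		if lok && rok && lhs.Name == rhs.Name {
			reports = append(reports, fmt.Sprintf(
				"%s: identifier %q compared with itself", fset.Position(bin.Pos()), lhs.Name))
		}
		return true
	})
	return reports, nil
}

func main() {
	src := "package demo\n\nfunc check(a, b int) bool {\n\treturn a == a // bug: should be a == b\n}\n"
	reports, err := findSelfComparisons(src)
	if err != nil {
		panic(err)
	}
	for _, r := range reports {
		fmt.Println(r)
	}
}
```

In a real go/analysis rule, the parse step disappears (the framework supplies pass.Files) and the append becomes a pass.Reportf call; the traversal logic stays the same.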
An abstract syntax tree (AST) is a structured program representation that shows what structures the code consists of: declarations, statements, expressions, and types. When working with an AST, the analyzer interprets the code not as text, but as a well-organized structure.
After parsing, the file is converted into a tree, where each node is a specific language element. For example:
- ast.FuncDecl represents a function declaration;
- ast.AssignStmt represents an assignment statement;
- ast.CallExpr represents a function call;
- ast.BinaryExpr represents a binary expression;
- ast.Ident represents the name of a variable or function.

The AST is always used because it's the foundation of all checks. If you need to find function calls, you look for ast.CallExpr nodes. If you need to check how a variable is used, you search for ast.Ident.
Let's take a look at a small code fragment:
x := a + b
You can use an auxiliary visualizer.
For this code snippet, the AST looks like this:
In the tree, pointers link the nodes, so the analyzer can move freely from one node to another. Because every node is a concrete Go type, a simple type assertion is enough to find the construct you need:
if call, ok := n.(*ast.CallExpr); ok {
// function call found
}
You can handle assignments, operations, conditions, and any other constructs in exactly the same way. For example, AssignStmt.Lhs and AssignStmt.Rhs contain the left and right sides of the statement.
Remember that the tree itself doesn't store data about variable types or constant values. We'll talk about it in the next section dedicated to go/types.
The go/types package lets the analyzer understand what type an expression has, where an identifier refers to, and whether operations are used correctly. It bridges the gap between the syntax tree and real program entities: variables, functions, types, and methods.
Let's say there is an expression in the code:
x := y + z
The AST will only say that this is a binary expression with the + operator, but it won't say what types y and z have or where the y variable is declared. That information comes from go/types. The result of running go/types is a types.Info structure.
Look at the most important fields:
- Types stores the type of each expression;
- Defs shows where objects are defined;
- Uses shows where objects are used.

Thus, any identifier can be expanded to a specific object (a variable, function, or field) along with its type.
To find out the type of an expression, you can use the following code:
if tv, ok := pass.TypesInfo.Types[call]; ok {
fmt.Printf("Expression type: %s\n", tv.Type.String())
}
An identifier contains only the name (x) but doesn't know what it's bound to. The semantic information makes this connection:
obj := pass.TypesInfo.ObjectOf(ident)
obj can be:
- a variable (types.Var);
- a function (types.Func);
- a constant (types.Const);
- a type name (types.TypeName);
- a package name (types.PkgName).

The golang.org/x/tools/go/analysis framework is a fundamental part of static analysis in Go. It eliminates routine work and provides a convenient structure for writing your own diagnostic rules. With it, you can quickly create an analyzer, integrate it into other tools, and run it on real projects.
Each diagnostic rule is represented as an object of the analysis.Analyzer type. It describes everything you need:
- its Name;
- its Doc string;
- the Run function that contains the check;
- its dependencies (Requires).

This turns analyzers into units of analysis that can be combined, merged into a single binary, and connected to linters.
The simplest analyzer looks like this:
var Analyzer = &analysis.Analyzer{
	Name: "mycheck",
	Doc:  "description of the rule",
	Run:  run,
}
Where run is a verification function:
func run(pass *analysis.Pass) (any, error) {
	// verification logic
	return nil, nil
}
The framework passes the analysis.Pass object into your diagnostic rule. It contains all the necessary information:
- Files contains the list of AST files (ast.File) ready for traversal;
- TypesInfo contains type information (types.Info) for expressions in the tree;
- Pkg contains the current package;
- Reportf and Report are methods for reporting diagnostics;
- Fset contains the position table (token.FileSet);
- ResultOf contains the results of other analyzers if the diagnostic rule has dependencies.

To report a problem, use the pass.Reportf method. For example:
pass.Reportf(call.Pos(), "Undesirable function call %s", funObj.Name())
The framework provides two convenient packages, singlechecker and multichecker.
singlechecker is used when you have a single rule:
package main

import (
	"golang.org/x/tools/go/analysis/singlechecker"
)

func main() {
	singlechecker.Main(Analyzer)
}
After building it, we can run:
go vet -vettool=./mycheck ./...
multichecker is used when there are many rules. They can be combined into a single tool:
multichecker.Main(
	rule1.Analyzer,
	rule2.Analyzer,
	rule3.Analyzer,
)
The advantages of go/analysis are quite obvious: it removes boilerplate, lets rules be combined into a single tool, and integrates with go vet and golangci-lint.

Your analyzer is valuable if it's easy to run. The Go ecosystem is designed so that analysis tools can be seamlessly integrated into familiar workflows: run via go vet and used in CI. Let's see how that works.
go vet is Go's built-in static analysis mechanism. It can load external analyzers when they're provided as a vettool.
To connect your analyzer, first build it:
go build -o mycheck ./cmd/mycheck
Now you can run the analyzer:
go vet -vettool=./mycheck ./...
Here's what you get: mycheck operates as a fully integrated part of go vet. This is a convenient way to distribute a simple set of rules.
The analyzer can also be integrated into golangci-lint. You can read more about this in the documentation and see an example.
If you've built the analyzer using singlechecker or multichecker, you can also run it directly:
./mycheck ./...
This option is handy for quick local runs and for environments where go vet integration isn't required.
Let's look at the simplest example to see how a Go analyzer rule is written.
The diagnostic rule should find empty if blocks like this one:
if cond {
}
The diagnostic rule uses the go/ast package to traverse the syntax tree.
Look at the code in the main.go file:
package main

import (
	"emptyif"
	"golang.org/x/tools/go/analysis/singlechecker"
)

func main() {
	singlechecker.Main(emptyif.Analyzer)
}
Now look at the code for the rule in the emptyif.go file:
package emptyif

import (
	"go/ast"

	"golang.org/x/tools/go/analysis"
)

var Analyzer = &analysis.Analyzer{
	Name: "emptyif",
	Doc:  "reports empty if statements",
	Run:  run,
}

func run(pass *analysis.Pass) (any, error) {
	for _, file := range pass.Files {
		ast.Inspect(file, func(n ast.Node) bool {
			if stmt, ok := n.(*ast.IfStmt); ok {
				// Check whether the body exists and is empty
				if stmt.Body != nil && len(stmt.Body.List) == 0 {
					pass.Reportf(stmt.Pos(), "empty if block")
				}
			}
			return true
		})
	}
	return nil, nil
}
Let's explore what this diagnostic rule consists of:
- ast.Inspect recursively traverses the tree nodes.
- The rule looks for IfStmt nodes; each node is matched with a type assertion: n.(*ast.IfStmt).
- stmt.Body.List contains the list of statements; if it is empty, the analyzer issues a warning.
- pass.Reportf outputs a message linked to the location in the source code.

If everything is set up correctly, you can build the analyzer and run it:
go build -o emptyif.exe ./cmd/emptyif
go vet -vettool=D:\emptyif\emptyif.exe ./...
The output will look something like this:
tests\emptyif_test\file1.go:12:2: empty if block
That's all for now. We've explored the basics of writing our own static analyzer in Go. If you find this article useful, we can do a deeper overview in the future, where we'll examine a real-world diagnostic rule, show how the testing stage proceeds, and so on.
By the way, I think you've already guessed that we're working on our own static analyzer for Go. If you'd like to join the EAP, keep an eye on our news.