2026-02-05 06:43:00
This guide covers how to set up OpenClaw (formerly Clawdbot) on your local machine and, most importantly, how to secure it so strangers can’t access your computer. If you are ready, then let’s get started! :)
First, open your terminal (Command Prompt on Windows, or Terminal on Mac/Linux). You need to install the tool globally. Run this command:
curl -fsSL https://openclaw.ai/install.sh | bash
\ OR if using npm directly:
npm install -g openclaw
Once installed, start the configuration process:
openclaw onboard
Security Warning: You will see a warning that the bot works on your local machine. Read it and accept.
Quick Start: Select “Quick Start” for the easiest setup.

Model Selection: Choose your AI provider (e.g., OpenAI Codex or GPT-4). You will need to log in to your provider account.
\ Connect a chat platform — After the model is selected, OpenClaw asks you to set up a chat interface. Select your preferred platform (e.g., Telegram).
\ For Telegram, message @BotFather and send the /newbot command; follow the prompts (the bot's username must end in _bot), then give the resulting bot token to OpenClaw. A similar process applies to WhatsApp, Discord, and other chat platforms.
\ Get Your User ID
You need to tell OpenClaw who is allowed to talk to it.
\ Pair Your Bot
Restart your gateway to apply changes:
openclaw gateway restart

Configure skills (optional) — OpenClaw can install skills (tools) to perform tasks such as sending emails or editing files. During onboarding, you can skip or install skills. If you choose to install, use npm as the node manager; otherwise, select Skip for now.
\ Provide API keys (optional) — Some skills require API keys (e.g., Brave Search API). During setup, you can say No if you don’t have keys yet.
\ Choose UI — OpenClaw offers a web‑based Control UI or a TUI. The TUI keeps everything in the command line and is recommended for first‑time setup. When ready, select Hatch in TUI to start the bot’s personality configuration. The bot will ask for its name and how to address you. After that, OpenClaw is ready to chat via the terminal and your chosen chat platform.
\ If you get stuck, please watch my YouTube tutorial:
https://youtu.be/D9j2tw5lps?si=IKmQFGwFmZ7L9hZ&embedable=true
Watch on YouTube: How to Set Up OpenClaw
OpenClaw can perform additional tasks after the initial setup.
\ Remember that each new capability increases the bot’s permissions, so enable them carefully and keep security in mind.
By default, giving an AI access to your computer carries risks. Follow these steps to lock it down.
Your bot shouldn’t be visible to the whole internet. Open the config file at:
~/.openclaw/openclaw.json
In the gateway section, change the bind address from 0.0.0.0 to 127.0.0.1 (loopback). This ensures only you (localhost) can access the gateway. Also make sure your gateway requires a token: check that authentication is set to mode: "token".
Next, don’t let your bot talk to strangers. Set "dmPolicy" to "pairing" (requires approval before a new user can message the bot). For groups, "mention" restricts the bot to replying only when mentioned, or you can set "groupPolicy" to "disabled" so the bot can't be added to public groups where it might leak data.
...
"channels": {
"telegram": {
"dmPolicy": "pairing",
"groupPolicy": "mention"
}
}
...
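Putting these settings together, a locked-down config might look like the following sketch. The channel block mirrors the snippet above; the exact gateway field names ("bind", "authentication") are assumptions based on this guide's description, so check the schema your OpenClaw version generates:

```json
{
  "gateway": {
    "bind": "127.0.0.1",
    "authentication": {
      "mode": "token",
      "token": "<your-secret-token>"
    }
  },
  "channels": {
    "telegram": {
      "dmPolicy": "pairing",
      "groupPolicy": "mention"
    }
  }
}
```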
Protect the files that store your API keys. Run this command so that only your user can access the credentials store:
chmod 700 ~/.openclaw/credentials
(If credentials is a single file rather than a directory, chmod 600 is enough; directories need the execute/traverse bit that 700 grants.)
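If you want to double-check the result, here is a small, generic Python sketch (independent of OpenClaw) that reports whether a path is readable or writable by group or others:

```python
import os
import stat

def is_private(path: str) -> bool:
    """Return True if only the owning user has any access to `path`."""
    mode = os.stat(path).st_mode
    # Any group/other permission bit set means the path is not private.
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Example: check the OpenClaw credentials store from the guide.
# print(is_private(os.path.expanduser("~/.openclaw/credentials")))
```

This checks the mode bits directly, so it works the same whether the path is a file or a directory.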
OpenClaw has a built-in tool to check for holes. Run this regularly:
openclaw security audit --deep --fix

If it finds issues, you can often fix them automatically with:
openclaw doctor --fix
Be careful when asking your bot to browse the web or read untrusted files. Bad actors can hide commands in text that trick the AI. Always use the Sandbox environment when experimenting with untrusted data.
After applying these security fixes, always restart your gateway:
openclaw gateway restart
\ If you want a simple walkthrough, please check my video tutorial:
https://youtu.be/rep62KFHtRE?si=FONdBK7aoKCoEddD&embedable=true
Watch on YouTube: How to secure OpenClaw Bot
OpenClaw gives you the power of a personal AI assistant that runs on your own hardware. When configured correctly, it can search the web, manage files, and respond to your chat messages across multiple platforms. However, because it uses tools that can execute commands on your system, security must be a first‑class concern.
\ Stay safe! Cheers! :)
\
2026-02-05 06:27:04
The artificial intelligence boom has had a profound impact on fintech and the access of UK residents to more comprehensive financial services. But the technology’s ability to unlock the potential of big data insights may mark a turning point for how we manage our wealth.
\ Big data has the power to transform savings by enabling hyper-personalized financial guidance and autonomous saving tools that are tailored to our spending patterns and financial goals.
\ Because artificial intelligence tools excel at contextually understanding data, the technology has paved the way for a boom in the quality of insights that customers can now access when managing their money.
\ Throughout the UK, financial institutions and apps are using AI-powered big data insights to provide more tailored recommendations, track expenses in real-time, and automate transfers to savings accounts, making it easier than ever to look after our wealth.
\ Data shows that 28 million UK adults are already using artificial intelligence tools to help manage their money, making personal finance the number one use case for AI nationwide today.
\ From tailored insights to fraud detection, big data is fundamentally changing our perception of money and the security we feel when saving. Let’s take a deeper look at the innovations that the technology is bringing to the table with the help of AI tools:
Big data allows fintech firms to accurately analyze critical factors about your cash flow, measuring your income, expenses, and transaction history to generate actionable insights using both AI and machine learning tools.
\ Machine learning (ML) is a natural companion to big data because the technology can seamlessly learn patterns from both structured and unstructured datasets to make predictions or decisions. This means that ML is a critical tool for providing spending insights while shaping product recommendations and fraud detection.
\ For example, using big data and ML, an app has the capability to notify users if their spending habits are on course to push them towards their overdraft in a month, or it may recommend a high-yield savings account to deposit surplus money into.
Beyond ML, big data can also support generative AI assistance for users and is becoming a powerful tool in wealth management for millions of adults in the UK.
\ Similarly, AI can interpret big data to study your habits based on your monthly income, where you spend your money, and what you save to provide tailored advice.
\ Already, platforms like Cleo, Mint, and Monarch Money are using AI algorithms to interpret the monthly spending patterns of users and can send messages about their most frequent outgoings, along with cost-effective alternatives.
\ There’s evidence that big data is supporting more efficiency in saving, with insights showing that those using AI-powered financial tools are cutting their overspending by around 15% on average.
Big data also provides real-time analysis of transaction patterns that can help banks detect and prevent instances of fraudulent activity instantly, helping more customers look after their finances and build trust in digital platforms.
\ This added security can also help to boost the safety of fintech services, with big data capable of assigning scores to transactions or accounts based on many different factors. This can help to efficiently launch investigations into suspicious activity, improving the user experience.
\ There’s also an element of personalization and support for financial decision-making in these enhanced security measures. Because risk models can instantly draw on previous spending behavior, they can also help fintech firms create bespoke rates for individuals seeking a loan, based on a more comprehensive case-by-case credit scoring system.
Big data has long been a transformative innovation for the fintech sector, but it’s the recent emergence of generative AI that’s helped take personalization to the next level for the best possible results.
\ Already, millions of UK adults are turning to AI for bespoke spending insights and wealth management advice. With the ability to work frictionlessly with big data, it’s never been easier for individuals to know their financial behaviour inside out.
2026-02-05 05:46:50
WASHINGTON, Jan 31 - Elon Musk's SpaceX wants to launch a constellation of 1 million satellites that will orbit Earth and harness the sun to power AI data centers, according to a filing at the Federal Communications Commission.
Source - Reuters.com
Only 2 days later, Elon announced the merging of xAI and SpaceX.
\

\
Everybody thinks AI is the story.
\ 1 Million AI servers bathing in multiple kilowatts of infinite sunlight. Cooled by outer space, streaming knowledge from above us. Straight down into our phones via Starlink on Starlink-enabled Tesla phones.
\ But it is not.
\ AI is the Trojan horse for highly-monitored payment rails, massive personal information rails, and now, … a new existential crisis. A reason to fear so much that we have no option but to behave. (Nukes tamed us, which is why there is no WW3 and the military-industrial complex is sour).
\ A reason so big we forget the then seemingly mundane.. like the sex crimes of a disgraced man who died decades ago … and worry about rockets in the hands of AI. Mega missiles, actually. Megatron, if we're unlucky.
\ 1 million years ago, humans threw spears at each other over a few metres. Now, they could potentially throw rocket-sized spears at each other (“accidentally”).

\ Blame the AI, blame human error. Not evil intentions. Space is big, though, so the errors will likely be small.
\ Meanwhile, back on Earth is you. For closure, some justice for the victims of the disgraced man and his now disgraced friends will be put together.
\ A small win, but you take it and move on. For you are oh-so-distracted (Anybody know what's going on with President Maduro?). Move to your AI-powered car, get driven to your AI-powered job, to ponder what you're playing with when you put barely-mice-brained autonomous systems in charge of the most powerful rockets on the planet.
\ It's hard not to feel the impending doom.
\ (Not the Marvel movie. Stay focused).
\

\

It’s not fear per se but uncertainty. Volatility turned to the max.
\ If you thought Bitcoin was volatile, the potential threat of an AI-powered rocket malfunctioning seems to me to be a better source of volatility. Hence money.
\ After all,
\ “Volatility is vitality”, says Saylor.
\ One potential upside—You can literally pump the price of gold without any humans being involved, by letting AI rockets move some gold bars to the moon. Cost -- a few barrels of cheap Venezuelan oil turned rocket fuel.
\ (I just don't like the way people stick to these ancient rocks when modern gold is available. So, I somewhat like the idea of dumping gold quite far away).
\

Or you can let the AI do more rocket backflips (no more human-command-centre skills. Humans would now be ineffective) and get those TikToks going.
\ OR—Have it go wrong with the rocket going elsewhere. A great way to make money for shortsellers and fear mongers. Many of whom will also be AI bots.
\ Whichever way you see it, a lot of money will be put on the line. Of course, AI servers in space will be the story. But at the back of our minds, we shall really be keeping an eye on those rockets. Again, Judgement day playing rent-free.
\

\
I would prefer that instead of bootstrapping rockets to AI (or is it the other way around?), AI is bootstrapped to Bitcoin. So, it (not a rocket) can take it to the moon.
\ Yeah, doesn't make much sense right now. But after those AI servers are built, I bet you, they'll find trading Bitcoin to be their favorite activity.
\ Hodl.

P.S. I just compiled a 210-question quiz based on the 2026 version of the My First Bitcoin book for my startup edutech platform.
\ Find the complete quiz at https://bitcoinhighschool.com under the ‘My First Bitcoin’ category.
\ CTA: Do it now. Sats rewards for correct answers loading soon!
\

2026-02-05 05:00:08
Case studies are performed to validate the proposed reliability assessment framework and cyber insurance model. As shown in Fig. 6, the benchmark IEEE RTS-GMLC is deployed [29]. The IEEE RTS-GMLC incorporates an increasing share of renewable energy resources such as wind and solar. To study the effectiveness of mutual insurance, the 3-area test system is divided into 5 TGs. The IEEE RTS-GMLC is further augmented by incorporating the epidemic cyberattack model. The cyberattack parameters of the epidemic network are assigned as follows: 𝑍𝑒𝑝𝑖 = 2000 hrs, 𝑅𝑒𝑝𝑖 = 4 hrs, 𝜀 = 2, and 𝑐 = 0.8. A preliminary comparison is made of the system risk in the test system under various scenarios. Risk indices estimating load curtailment and fault coverage are adopted from [30]. Denote 𝐿𝐶 as the load curtailment and 𝐹𝐶 as the count of faulty buses at the m-th time step.
\ The expected 𝐿𝐶 and 𝐹𝐶 are defined as follows:

$$\mathrm{ELC} = \frac{1}{N_m}\sum_{m=1}^{N_m} LC_m \quad (10)$$

$$\mathrm{EFC} = \frac{1}{N_m}\sum_{m=1}^{N_m} FC_m \quad (11)$$

where $N_m$ is the number of sampled time steps. Parameters of the cyber-physical elements installed in the substations are listed in Table II. When the substation’s smart monitoring is functional, the server is connected to the other elements; otherwise, the server is disconnected from them. Six scenarios are studied to demonstrate the effectiveness of the job assignment and smart monitoring. As shown in Table III, the deployment of job assignment and smart monitoring technologies effectively reduces the ELC and EFC. Reduced ELC and EFC indicate enhanced security and reliability of power supply. Job assignment gives Scenario 2 a 20% improvement over Scenario 1 in both ELC and EFC. With the smart monitoring technology enforced, Scenario 4 improves 7% on ELC and EFC over Scenario 1. In Scenarios 5 and 6, smart monitoring plus job assignment further improves several percent over Scenarios 2 and 3, which use job assignment alone.
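As a minimal illustration of the expected-index definitions in Eqs. (10)-(11), the indices are simple sample means over the simulation horizon. This is a sketch with invented sample values, not the authors' code:

```python
def expected_index(samples):
    """Sample mean of a per-time-step risk index, as in Eqs. (10)-(11).

    `samples` holds one value per sampled time step, e.g. load
    curtailment LC_m (MW) or faulty-bus count FC_m.
    """
    return sum(samples) / len(samples)

# Hypothetical hourly load-curtailment samples (MW) from a short MCS run.
lc = [0.0, 0.0, 12.5, 0.0, 30.0, 0.0, 0.0, 7.5]
elc = expected_index(lc)  # -> 6.25
```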
\ The reliability-based OPF is carried out in MCS based on the state sampling method. The sampled period is 40 years with hourly time steps. The server smart technology deployment within the substations determines the SCT. Cyberattacks that penetrate the substation servers may disturb grid operation by sending spurious commands to disconnect generation from the grid, causing physical load losses. The load loss statistics are then converted into monetary reliability worth to estimate the cybersecurity insurance premiums. To highlight the merits of the proposed Shapley premium design, two case groups are created to compare job thread assignment, smart monitoring, and correlation coefficients at varying degrees. Case Group 1 is based on Scenario 1 (𝐽1, 𝜇𝑏, 𝜆𝑏), where only a single job thread is available in the substation, without smart monitoring. Case Group 2 is based on Scenario 6 (𝐽3, 𝜇𝑐, 𝜆𝑐), where the strongest job assignment and substation smart monitoring are enforced. To explore the loss characteristics in Case Group 1, Table IV summarizes the expected values, standard deviations (SDs), and coefficients of variation (CoVs) under various strengths of correlation 𝑟. The CoV is the SD divided by the expected value. The expected values come close to the SDs, so the CoVs fluctuate only in a small range of [0.74, 1.13].

Since a stronger correlation 𝑟 signifies the infectiousness of the epidemic model and tends to bring higher expected losses, the common cyber risk across TGs also increases. In Case Group 2, the incentive to invest in cyber-physical enhancement can be observed from Table V: expected losses are reduced substantially, and the SDs are reduced to a lesser extent, with the CoVs lying in [0.88, 1.33]. In Fig. 7, the sampled SoI among the TGs is demonstrated in the Pearson correlation matrix.
\ The correlation matrix is symmetric, and the correlation between each pair of TGs can be observed in the off-diagonal entries. Fig. 8(a) depicts the correlation matrix of Case Group 1. When 𝑟 = 0, the SoI correlations across the TGs are close to 0, with higher correlations between neighboring TGs in the same areas. The correlations range around 0.45 as 𝑟 increases to 0.5. When 𝑟 = 1, the correlations across all TGs are above 0.9. The correlation matrix of Case Group 2 is shown in Fig. 8(b). Due to reduced load losses, the correlations between the same pairs of TGs are in general weaker than in Case Group 1. Insurance premiums are designed to prepare TGs for catastrophic losses induced by probable cyberattack events. For interconnected TGs, mutual insurance accounting for the respective marginal loss statistics is a sensible option.
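The off-diagonal entries of these matrices are ordinary Pearson coefficients between the sampled losses of pairs of TGs. A self-contained sketch of that computation (the sample data are invented for illustration):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical sampled losses of two TGs; a value near 1 indicates
# strongly shared (systemic) cyber risk, as when r approaches 1.
tg1 = [1.0, 4.0, 2.0, 8.0, 5.0]
tg2 = [2.0, 9.0, 3.0, 16.0, 11.0]
rho = pearson(tg1, tg2)
```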
\ The premium with a high-risk loading offers solid indemnity, which may however be less financially appealing to potential participants. An ideal premium design should be meticulously formulated to avoid excessive financial burdens while providing sufficient loss indemnities for the insured parties. The highly infectious nature of the cyber epidemic model dictates a heavily skewed tail risk. To validate the design of the proposed cyber-insurance principle, herein (a) TCE premium 𝜋1 , (b) Coalitional premium 𝜋2 , and (c) Shapley premium 𝜋3 of this study are compared at various degrees of correlation of the TGs. The TCE Premium is the most conservative design predominantly responsive to the tail risk, providing great redundancy at the cost of high-risk loading. On the contrary, the Coalitional Premium is the most affordable package by excluding extreme high-loss events with low probabilities. The Shapley Premium is cooperative and tailored to add further coverage against the tail risk, striking a balance between the affordability and loss coverage.

To gauge the relative premium burden against the expected risk, the RLC is defined as follows:

$$\rho(\mathcal{L}_q) = \frac{\pi(\mathcal{L}_q)}{E[\mathcal{L}_q]} - 1 \quad (12)$$

where 𝜌(ℒ𝑞) should generally be positive to gather a sufficient budget for loss coverage. While a positive RLC is preferable against unexpected extreme risk, an excessively high RLC would discourage the TGs from insurance participation. In [17], the indemnities of 𝜋1(ℒ𝑞) are not clearly specified, since the original design is tailored to a third-party insurer. In this paper, all premium designs are assumed to be mutual insurance: all participating entities are both insurers and insureds. For the sake of brevity, the indemnities of 𝜋1(ℒ𝑞) are proportionally allocated based on 𝛤𝑞𝜓(𝜋2):

$$\Gamma_q^{\psi}(\pi_1) = \sum_q \pi_1(\mathcal{L}_q) \cdot \frac{\Gamma_q^{\psi}(\pi_2)}{\sum_q \Gamma_q^{\psi}(\pi_2)} \quad (13)$$

In Tables VI and VII, 𝜋1, 𝜋2, and 𝜋3 are evaluated based on the loss statistics extracted from the two case groups with heavy tail risks. The characteristics of each design are elaborated numerically as follows. The premiums of Case Group 1 are shown in Table VI. In each TG, 𝜋1, 𝜋2, and 𝜋3 are positively correlated with the strength of correlation 𝑟. 𝜋1 has the most conservative payment schedule and can be financially burdensome; it may penalize the participants with heavy risk loading when extreme catastrophic events do not happen. The cost-effectiveness of 𝜋1 is unacceptably low because the maximum of 𝜌1 exceeds 3. On the flip side, 𝜋2 is an entry-level premium design devised to be the most affordable and evenly distributed package across the TGs. 𝜋2 offers small indemnities in line with the premiums collected from the TGs.


𝜌2 for some TGs can be slightly negative, with indemnities supplemented by other TGs. However, risk beyond the expected losses could barely be covered by 𝜋2. 𝜋3 rewards TGs of relatively low risk loading with high indemnities. While 𝜋1 provides higher indemnities than 𝜋3, 𝜋3 offers affordability comparable to the coalitional platform of 𝜋2. The proposed 𝜋3 substantially alleviates the insolvency hazard of 𝜋2. 𝜌2 spans from -0.26 to 0.58; by contrast, 𝜌3 is dispersed in [-0.16, 0.81], a typical range of risk loading. 𝜋3 offers a wider margin in risk loading than 𝜋2 to guarantee a sufficient budget to cover individual risk.
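The risk loading of Eq. (12) and the proportional indemnity allocation of Eq. (13) can be illustrated with a toy numeric sketch; all premiums, expected losses, and reference indemnities below are made up for illustration:

```python
def rlc(premium, expected_loss):
    """Relative loading coefficient, Eq. (12): rho = pi / E[L] - 1."""
    return premium / expected_loss - 1

def allocate_proportionally(total_pool, weights):
    """Eq. (13): split a premium pool in proportion to reference
    indemnities (here, the Gamma values under the coalitional pi_2)."""
    s = sum(weights)
    return [total_pool * w / s for w in weights]

# Hypothetical figures for three TGs.
premiums = [12.0, 8.0, 5.0]    # pi_1(L_q) collected per TG
gamma_pi2 = [3.0, 2.0, 1.0]    # reference indemnities under pi_2
pool = sum(premiums)           # 25.0
indemnities = allocate_proportionally(pool, gamma_pi2)  # -> [12.5, 8.33..., 4.16...]
loading = rlc(12.0, 10.0)      # -> 0.2, i.e. 20% risk loading
```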
\ In Table VII, the risk loadings in Case Group 2 generally increase due to the enhanced security measures that reduce the tail risk profile. 𝜌1 has a maximum close to 4, which could be too high to motivate entities to participate. 𝜋2 is evenly distributed against average risk, with 𝜌2 lying in [-0.24, 0.72]. 𝜋3 renders ideal risk loading, with 𝜌3 rarely exceeding 1. High indemnity capacity and low risk loading make the proposed 𝜋3 a potentially compelling insurance model in practice. The probability of insolvency 𝛷(𝜋) is another risk measure, which quantifies the capability of the insurance to mitigate insolvency. 𝛷(𝜋) is defined as the probability that the loss is greater than the indemnity:

$$\Phi(\pi) = \Pr\left[\mathcal{L}_q > \Gamma_q^{\psi}(\pi)\right] \quad (14)$$

As shown in Table VIII, in Case Group 1, 𝜋1 generally provides the best insolvency alleviation, with the lowest probabilities of insolvency. In fact, 𝜋1 is such a conservative premium design against risk that the insolvency probability in some cases is 0. While 𝜋3 leads to insolvency probabilities lower than 𝜋2 and greater than 𝜋1, 𝜋3 has affordability superior to 𝜋1.
\ In Case Group 2, when the cyber risk is significantly reduced, 𝜋3 can restrain the insolvency to be about as low as that of 𝜋1 . Thus, 𝜋3 offers an economical option with relatively sufficient insolvency mitigation.
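In a sampling-based study such as this one, the insolvency probability of Eq. (14) reduces to the fraction of sampled losses exceeding the indemnity. A minimal sketch with hypothetical numbers:

```python
def insolvency_probability(losses, indemnity):
    """Empirical estimate of Eq. (14): Pr[L_q > Gamma_q(pi)]."""
    exceed = sum(1 for loss in losses if loss > indemnity)
    return exceed / len(losses)

# Hypothetical sampled losses for one TG and an indemnity cap of 10.
losses = [0.0, 2.0, 5.0, 12.0, 30.0, 1.0, 0.0, 9.0, 15.0, 3.0]
phi = insolvency_probability(losses, indemnity=10.0)  # -> 0.3
```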

\
In this paper, a mutual insurance premium principle is designed to fairly share cyber risks across the participating TGs and control the overall insolvency risk. This study is among the first endeavors to approach cyber-insurance by estimating insolvency. In the case studies, it is shown that the smart monitoring and job thread assignment solutions can work standalone or together to boost the reliability of TGs. The proposed Shapley premium offers a reduced insolvency probability while remaining as affordable as the coalitional premium. More challenges may occur when real-life variables are factored in.
Since any two power system servers are to some extent connectable, establishing the topology of cyber node connections could be complicated. Selecting weights to prioritize the crucial edges in the cyber node graph could be essential. There are also challenges on the actuarial end. First, accurate cyber risk estimation for specific systems relies on long-term historical data collection, and how much risk loading a premium design should reserve to be sufficient against tail risk is left to further exploration. Second, the proposed Shapley insurance scheme is designed to achieve two goals: insolvency risk control and fair distribution of indemnity. Although these goals are achieved most of the time, there are exceptions, especially when some participants are struck by unexpectedly high losses due to inadequate self-protection. This shall motivate future work on insurance schemes that reflect the self-protection level and thus incentivize cyber-security investment.
[1] R. V. Yohanandhan, R. M. Elavarasan, P. Manoharan, and L. Mihet-Popa, “Cyber-Physical Power System (CPPS): A review on modeling, simulation, and analysis with cyber security applications,” IEEE Access, vol. 8, pp. 151019–151064, 2020, doi: 10.1109/ACCESS.2020.3016826.
[2] M. Barrett, Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1, NIST Cybersecurity Framework, 2018. [Online]. Available: https://doi.org/10.6028/NIST.CSWP.04162018
[3] A. Huseinović, S. Mrdović, K. Bicakci, and S. Uludag, “A survey of denial-of-service attacks and solutions in the smart grid,” IEEE Access, vol. 8, pp. 177447–177470, 2020, doi: 10.1109/ACCESS.2020.3026923.
[4] J. Hong, R. F. Nuqui, A. Kondabathini, D. Ishchenko, and A. Martin, “Cyber attack resilient distance protection and circuit breaker control for digital substations,” IEEE Transactions on Industrial Informatics, vol. 15, no. 7, pp. 4332–4341, Jul. 2019, doi: 10.1109/TII.2018.2884728.
[5] H. Lin, Z. T. Kalbarczyk, and R. K. Iyer, “RAINCOAT: Randomization of network communication in power grid cyber infrastructure to mislead attackers,” IEEE Transactions on Smart Grid, vol. 10, no. 5, pp. 4893–4906, Sept. 2019, doi: 10.1109/TSG.2018.2870362.
[6] T. Duan et al., “Intelligent processing of intrusion detection data,” IEEE Access, vol. 8, pp. 78330–78342, 2020, doi: 10.1109/ACCESS.2020.2989498.
[7] L. Wei, A. I. Sarwat, W. Saad, and S. Biswas, “Stochastic games for power grid protection against coordinated cyber-physical attacks,” IEEE Transactions on Smart Grid, vol. 9, no. 2, pp. 684–694, Mar. 2018, doi: 10.1109/TSG.2016.2561266.
[8] A. Binbusayyis and T. Vaiyapuri, “Identifying and benchmarking key features for cyber intrusion detection: An ensemble approach,” IEEE Access, vol. 7, pp. 106495–106513, 2019, doi: 10.1109/ACCESS.2019.2929487.
[9] K. Yamashita et al., “Measuring systemic risk of switching attacks based on cybersecurity technologies in substations,” IEEE Transactions on Power Systems, vol. 35, no. 6, pp. 4206–4219, Nov. 2020, doi: 10.1109/TPWRS.2020.2986452.
[10] K. Huang, C. Zhou, Y. Qin, and W. Tu, “A game-theoretic approach to cross-layer security decision-making in industrial cyber-physical systems,” IEEE Transactions on Industrial Electronics, vol. 67, no. 3, pp. 2371–2379, Mar. 2020, doi: 10.1109/TIE.2019.2907451.
[11] M. Li et al., “Hybrid calculation architecture of cyber physical power system based on correlative characteristic matrix model,” in Proc. IEEE CYBER, 2018, pp. 584–588, doi: 10.1109/CYBER.2018.8688204.
[12] Y. Chen, J. Hong, and C.-C. Liu, “Modeling of intrusion and defense for assessment of cyber security at power substations,” IEEE Transactions on Smart Grid, vol. 9, no. 4, pp. 2541–2552, Jul. 2018, doi: 10.1109/TSG.2016.2614603.
[13] B. Cai et al., “Application of Bayesian networks in reliability evaluation,” IEEE Transactions on Industrial Informatics, vol. 15, no. 4, pp. 2146–2157, Apr. 2019, doi: 10.1109/TII.2018.2858281.
[14] B. Falahati, Y. Fu, and M. J. Mousavi, “Reliability modeling and evaluation of power systems with smart monitoring,” IEEE Transactions on Smart Grid, vol. 4, no. 2, pp. 1087–1095, Jun. 2013, doi: 10.1109/TSG.2013.2240023.
[15] P. Ghazizadeh et al., “Reasoning about mean time to failure in vehicular clouds,” IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 3, pp. 751–761, Mar. 2016, doi: 10.1109/TITS.2015.2486523.
[16] M. Xu and L. Hua, “Cybersecurity insurance: Modeling and pricing,” North American Actuarial Journal, vol. 23, no. 2, pp. 220–249, 2019.
[17] P. Lau et al., “A cybersecurity insurance model for power system reliability considering optimal defense resource allocation,” IEEE Transactions on Smart Grid, vol. 11, no. 5, pp. 4403–4414, Sept. 2020, doi: 10.1109/TSG.2020.2992782.
[18] P. Lau et al., “A coalitional cyber-insurance design considering power system reliability and cyber vulnerability,” IEEE Transactions on Power Systems, vol. 36, no. 6, pp. 5512–5524, Nov. 2021, doi: 10.1109/TPWRS.2021.3078730.
[19] I. Vakilinia and S. Sengupta, “A coalitional cyber-insurance framework for a common platform,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 6, pp. 1526–1538, Jun. 2019, doi: 10.1109/TIFS.2018.2881694.
[20] D. Monderer, D. Samet, and L. S. Shapley, “Weighted values and the core,” International Journal of Game Theory, vol. 21, no. 1, pp. 27–39, 1992.
[21] S. Béal et al., “The proportional Shapley value and applications,” Games and Economic Behavior, vol. 108, pp. 93–112, 2018, doi: 10.1016/j.geb.2017.08.010.
[22] E. Algaba, V. Fragnelli, and J. Sánchez-Soriano, Handbook of the Shapley Value. Boca Raton, FL, USA: CRC Press, 2019.
[23] B. Falahati and Y. Fu, “Reliability assessment of smart grids considering indirect cyber-power interdependencies,” IEEE Transactions on Smart Grid, vol. 5, no. 4, pp. 1677–1685, Jul. 2014, doi: 10.1109/TSG.2014.2310742.
[24] M. Schiffman, “Common Vulnerability Scoring System (CVSS).” [Online]. Available: http://www.first.org/cvss/
[25] Y. Satotani and N. Takahashi, “Depth-first search algorithms for finding a generalized Moore graph,” in Proc. TENCON, 2018, pp. 832–837, doi: 10.1109/TENCON.2018.8650418.
[26] C. Wang et al., “Impacts of cyber system on microgrid operational reliability,” IEEE Transactions on Smart Grid, vol. 10, no. 1, pp. 105–115, Jan. 2019, doi: 10.1109/TSG.2017.2732484.
[27] Z. Yang, C. Ten, and A. Ginter, “Extended enumeration of hypothesized substation outages incorporating overload implication,” IEEE Transactions on Smart Grid, vol. 9, no. 6, pp. 6929–6938, Nov. 2018, doi: 10.1109/TSG.2017.2728792.
[28] C.-W. Ten et al., “Impact assessment of hypothesized cyberattacks on interconnected bulk power systems,” IEEE Transactions on Smart Grid, vol. 9, no. 5, pp. 4405–4425, Sept. 2018, doi: 10.1109/TSG.2017.2656068.
[29] C. Barrows et al., “The IEEE reliability test system: A proposed 2019 update,” IEEE Transactions on Power Systems, vol. 35, no. 1, pp. 119–127, Jan. 2020, doi: 10.1109/TPWRS.2019.2925557.
[30] G. Cao et al., “Operational risk evaluation of active distribution networks considering cyber contingencies,” IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 3849–3861, Jun. 2020, doi: 10.1109/TII.2019.2939346.
\
:::info Authors:
Pikkin Lau, Student Member, IEEE, Lingfeng Wang, Senior Member, IEEE, Wei Wei, Zhaoxi Liu, Member, IEEE, and Chee-Wooi Ten, Senior Member, IEEE
:::
:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license
:::
\
2026-02-05 04:57:57
Washington, DC, February 4th, 2026/CyberNewsWire/--MomentProof, Inc., a provider of AI-resilient digital asset certification and verification technology, today announced the successful deployment of MomentProof Enterprise for AXA, enabling cryptographically authentic, tamper-proof digital assets for insurance claims processing.
MomentProof’s patented technology certifies images, video, voice recordings, and associated metadata at the moment of capture, ensuring claims evidence is protected against AI-based manipulation, deepfakes, and other malicious digital alterations.
“We are pleased to ensure the authenticity of images and recordings essential for the insurance industry with patented MomentProof technology,” said Ahmet Soylemezoglu, President and the Co-Founder of MomentProof, Inc. “Delivering MomentProof Enterprise for AXA demonstrates our commitment to guaranteeing that insurance claims are backed by authentic digital assets resilient to AI-based and other forms of manipulation.”
By integrating MomentProof-certified digital assets into its claims workflow, AXA eliminated probabilistic post-processing steps traditionally used to assess authenticity, while significantly reducing fraud risk and claims processing time.
“MomentProof-certified images now provide AXA with verified authenticity for all captured claim data, including precise location, timestamp, device information, and confirmation of the authorized individual. This robust verification process has led to a substantial reduction in fraud risk and has accelerated the claims processing timeline,” stated Levent Serinol, Senior Director at AXA. “With MomentProof joining AXA’s industry-leading anti-fraud technology framework, AXA continues to strengthen its position at the forefront of claims security and efficiency,” added Director Serinol.
MomentProof operates in two patented phases: Certification, which cryptographically seals digital assets in real time and issues a certificate of authenticity; and Verification, which validates certified assets with 100% cryptographic certainty, delivering deterministic pass/fail results. Applications of MomentProof extend to journalism, law, chain of custody, and digital forensics.
MomentProof Enterprise is available as a GDPR- and SOC 2-compliant cloud service or as an on-premises deployment, with Mobile, Enterprise, and Messaging APIs for seamless integration.
Founded in 2022 in Europe, MomentProof, Inc. has offices in Washington, DC and delivers tamper-proof, AI-resilient digital asset protection for insurance and digital authenticity proof applications. The company is actively seeking qualified distributors to expand in U.S. markets.
Website: www.MomentProof.com
MomentProof, Inc. provides patented technology for certifying and verifying the authenticity of digital assets at the moment of capture. Its AI-resilient solutions enable organizations to protect images, videos, audio, and metadata from manipulation, supporting applications in insurance, journalism, legal processes, and digital forensics.
Founded in 2022, with offices in the USA, Europe, and Asia, MomentProof offers both cloud-based and on-premises deployments to meet varying compliance requirements, including GDPR and SOC 2.
Laura Smith
MomentProof Technologies
:::tip This story was published as a press release by Cybernewswire under HackerNoon’s Business Blogging Program
:::
\
2026-02-05 04:47:39
:::info I’m an AI automation and GTM consultant with 7+ years in digital assets and enterprise strategy, now building agentic AI workflows for B2B sales and VC ops; my research has supported $10M+ VC rounds and informed strategy at brands like Nike, HBS, Lacoste, McKinsey, and BCG.
:::
A typical VC analyst reviews around 3,000 decks annually and invests in roughly 9. Average time spent per deck: 2-3 minutes (up to 10 if we include preliminary research). This means 99.7% of their time is “wasted”.
This isn’t a dealflow problem. Most VCs have dealflow. The best funds are seeing 700+ decks a quarter. The issue is triage throughput.
Here’s what actually happens:
Deck hits inbox → someone says “we’ll review this” → founders send a follow-up two weeks later → by the time anyone actually reads it, the round is closed or the founder has moved on.
The best opportunities don’t wait for you to find them. They are often buried under the noise of a thousand average decks, or they move so fast they're gone before you even open the email. Finding a great company is quite literally like finding a needle in a haystack.
If you run or work at a fund that sees 200+ decks a quarter but only has bandwidth to seriously evaluate 20-30, this piece is for you.
I’ve been in crypto and investing since 2017, where I first began navigating high-volume, data-heavy markets. I’ve sat on both sides of the table, evaluating opportunities and watching how funds evaluate them. For the past year, I’ve applied that experience to building AI automation systems for operators.
The VC triage problem kept surfacing in conversations, so I built a system specifically to solve it.
\
The data is overwhelming. According to Allvue’s 2025 GP Outlook Survey, AI adoption among private market firms hit 82% by year-end 2024, up from just 47% the previous year. The number of data-driven VC firms jumped 20% from 2023 to 2024 alone.
Deloitte’s 2025 M&A Trends Survey found that 97% of respondents have begun incorporating generative AI or advanced data analytics into their dealmaking processes, with digitization rising significantly in target identification (80%) and target screening (79%).
The elite funds have been throwing resources at this problem:
SignalFire raised additional capital in April 2025, significantly expanding its assets under management. Its Beacon AI platform analyzes vast amounts of organizational and workforce data, serving as what founder Chris Farmer described to Crunchbase as “the fabric that stitches the entire firm together.”
EQT’s Motherbrain platform has been instrumental in sourcing several direct investments. It integrates numerous external data sources with the firm’s internal network insights. At their 2024 Capital Markets Day, EQT outlined its goal to “build the most AI-literate investment organization in the world.”
Nik Storonsky, founder of Revolut, co-founded the AI-driven VC firm QuantumLight, which uses proprietary machine learning algorithms to source investments and claims to “almost eliminate human judgment” from the process.
However, some of these systems were built on pre-generative AI architecture. They required massive teams and eight-figure budgets to build. They’re optimized for their specific workflows, not customizable. And whether they’ve actually “solved” triage, or just thrown enterprise resources at it, is an open question.
I also think “human VCs” can’t be entirely automated, as relationships and human connection play a big role in the industry. Instead, these automations should be applied to the highest-leverage, most AI-fit parts of the process.
What’s clear is that the problem is real, the market is moving, and generative AI has changed what’s possible.
\
The market is moving. But most implementations reveal how early we still are.
Tim Draper launched an AI “digital twin” that lets founders pitch via text or voice chat. The direction is right, AI handling the top of funnel so humans focus on high-signal deals. But Draper himself admits it’s “flaky.” More fundamentally, the system doesn’t accept pitch deck uploads, missing the structured information that decks provide.
Tim Draper’s chatbot
Alliance DAO is the gold standard for crypto accelerators, vetting over 3,000 startups a year. At that volume, they have no choice but to be systematic. However, their entry point relies on a heavy initial application consisting of 39 different questions.
While this extracts deep data, it creates a massive adverse selection risk. The “hottest” founders, the ones with multiple term sheets and zero time to spare, often won’t fill out 40-question forms. They don’t have to. This means the most systematic funds can end up seeing the most desperate founders, while the elite talent bypasses the “inbox” entirely.
\
What’s missing across the board is a Zero-Touch Interview layer.
Instead of forcing the founder to do the manual labor of data entry, a modern system should be able to perform a Key Claims Audit autonomously. By extracting data from the deck, verifying team pedigree via LinkedIn, and cross-referencing technical claims against public data, a fund could get the same depth as a 15-minute introductory call without either party ever picking up the phone. You get high-level signal with zero-click friction.
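The verdict logic behind such an audit can be sketched in a few lines. This is a minimal illustration under my own assumptions: `audit_claim` is a hypothetical function name, and the `UNVERIFIED` label is mine; the memo format shown later in this piece uses `SUPPORTED` and `INCONSISTENT` verdicts per claim.

```python
def audit_claim(sources: dict[str, bool]) -> str:
    """Return an audit verdict for one claim.

    `sources` maps a source name (e.g. "Pitch Deck", "Website") to whether
    that source supports the claim after cross-referencing.
    """
    votes = set(sources.values())
    if votes == {True}:
        # Every consulted source backs the claim.
        return "SUPPORTED"
    if votes == {True, False}:
        # Sources disagree -- exactly the kind of flag a human should review.
        return "INCONSISTENT"
    # No source confirms the claim (or none were checkable).
    return "UNVERIFIED"
```

A real system would generate `sources` by comparing LLM-extracted claims against the deck, the website, and public data; the value of the pattern is that disagreement between sources is surfaced as a first-class signal rather than buried.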
The pattern is clear: right problem, incomplete solutions. Most funds are stuck choosing between a “flaky” chatbot or a form that scares away the best talent.
Historically, “quant” funds like SignalFire or EQT had to spend millions building custom models just to understand unstructured text. Today, Generative AI allows any fund to quantify the qualitative, turning messy pitch decks into structured, comparable data without a dedicated team of data scientists.
You no longer need an eight-figure engineering budget to have a quantitative edge; you just need the right architecture.
I spent the last month architecting the solution myself: The 60-Second Triage Engine.
\
An end-to-end workflow that turns inbound startup decks into scored CRM entries with thesis-aligned analysis. The whole thing runs in sub-60 seconds per deck 24/7.
A simple form (I used Typeform; other form builders such as Tally work as well) with as few or as many questions as you like. Crucially, it includes an area to submit the pitch deck.
All of this occurs without any human input beyond the startup’s initial submission:
The system extracts text from the PDF, pulls LinkedIn data, scrapes the company website, and cross-references claims against public data sources. It’s doing in seconds what would take an analyst 15-20 minutes.
It then uploads this information directly to your CRM, whether that’s HubSpot (as shown in the images below) or a simple Google Sheet, so you can easily track all submissions and revisit them when needed.
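The extraction-and-logging flow above can be sketched as a simple pipeline. This is a minimal illustration, not the production workflow: `DeckSubmission`, `enrich`, and `push_to_crm` are hypothetical names, and each stage is stubbed; a real build would plug in a PDF text extractor, a LinkedIn enrichment API, a web scraper, and the HubSpot or Google Sheets API.

```python
from dataclasses import dataclass


@dataclass
class DeckSubmission:
    """One inbound form submission with its attached deck."""
    company: str
    pdf_path: str
    website: str
    founder_linkedin: str


def extract_deck_text(sub: DeckSubmission) -> str:
    # Stub: a real version would parse the PDF (OCR if needed).
    return f"deck text for {sub.company}"


def enrich(sub: DeckSubmission) -> dict:
    # Assemble the structured record an analyst would otherwise
    # compile by hand over 15-20 minutes.
    return {
        "company": sub.company,
        "deck_text": extract_deck_text(sub),
        "website": sub.website,
        "founder_linkedin": sub.founder_linkedin,
    }


def push_to_crm(record: dict, crm_rows: list) -> None:
    # Stub CRM write: an in-memory list stands in for HubSpot
    # or a Google Sheet row append.
    crm_rows.append(record)
```

Wired together, a submission flows straight through: `push_to_crm(enrich(submission), crm)` — no human touches the record until it is already structured and logged.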
To show you what this looks like in practice, I ran a publicly available deck through the system: RoboForce, a robotics company that raised $10M in 2025:
This gets passed to the CRM as seen above, but it didn’t fit in the screenshot.
RoboForce — Investment Memo (January 22, 2026)
Conviction Level: Med
The Thesis (1 sentence):
RoboForce is a deep-tech robotics company leveraging proprietary physical AI (RF-Net) and an elite team to automate labor in harsh industrial environments, but the investment risk hinges entirely on resolving the conflicting claims regarding current commercial deployment status.
Key Claims Audit:
Claim: RoboForce is developing autonomous, mobile manipulation robots (“Robo-Labor”) named TITAN.
Status: SUPPORTED
Sources: Pitch Deck; Website
Claim: The core IP is a unique “AI Expert Model with 1mm Accuracy” for complex manipulation tasks (Pick, Place, Press, Twist, Connect).
Status: SUPPORTED
Sources: Pitch Deck; Website
Claim: Founder Leo Ma is a high-caliber executive with a track record of taking a company (CYNGN) to Nasdaq.
Status: SUPPORTED
Sources: LinkedIn Analysis
Claim: The team includes engineers and leaders from Tesla, Amazon Robotics, Google, Waymo, and CMU/UMich.
Status: SUPPORTED
Sources: Pitch Deck; Website
Claim: The company is planning or executing an “Alpha Robot Onsite Test” in 2024, indicating a pilot stage.
Status: INCONSISTENT
Sources: Pitch Deck (contradicts Website)
Claim: The company is “Production-ready with active commercial deployments” and initial pilots are already active.
Status: INCONSISTENT
Sources: Website (contradicts Pitch Deck)
Claim: RoboForce has secured more than 11,000 robot orders through letters of intent (LOI).
Status: SUPPORTED
Sources: Website
Claim: Institutional backing includes Myron Scholes, Gary Rieschel, and Carnegie Mellon University.
Status: SUPPORTED
Sources: Pitch Deck; Website
Claim: Total funding is $15 million, including a $10 million early-stage announcement.
Status: SUPPORTED
Sources: Website
Claim: Current ARR, Burn Rate, and Gross Margin metrics are not disclosed.
Status: SUPPORTED
Sources: Pitch Deck
Bull Case:
- Exceptional Pedigree and Founder-Market Fit: The founding team has direct experience building and scaling foundational AI/Robotics technology (Baidu, CMU) and successfully taking a company public (CYNGN), lending strong credibility to the technical claims.
- Differentiated Deep Tech IP: The company is developing a full-stack, proprietary solution including the TITAN hardware and the RF-Net foundation model, claiming 1mm precision and continuous learning capability, positioning it as a foundational platform for "Physical AI."
- Strong Indicated Market Demand: The company explicitly targets a massive and worsening labor shortage in utility-scale solar construction and has secured over 11,000 robot orders via LOI.
- Authoritative Validation: The product has been highlighted by NVIDIA CEO Jensen Huang at GTC, and the founder has been recognized as a WEF Technology Pioneer (Future recognition).
Bear Case:
- Contradiction on Commercial Status: The most critical risk is the INCONSISTENCY between the Pitch Deck (2024 Alpha Test/PoC stage) and the Website (Production-ready/Active commercial deployment). This confusion suggests a misalignment on execution timeline or marketing vs. reality.
- Unverified Financial Health: Core financial metrics required for evaluating a capital-intensive deep-tech play (ARR, Burn Rate, Gross Margin, Churn) are Not Disclosed.
- Execution Risk: This is a full-stack, proprietary hardware and software venture, demanding significant, sustained capital and flawless execution across complex mechanical engineering, AI, and control systems.
- Lack of Commercial Proof: While 11,000 LOIs are cited, the company has not provided verifiable customer logos, signed contracts, or revenue figures to substantiate the claimed active pilots.
Power Law Test: PLAUSIBLE
Hard Truths (3 questions):
- Which statement regarding product maturity is accurate: the Pitch Deck's "Alpha Robot Onsite Test 2024" or the Website's "Production-ready with active commercial deployments"?
- What is the current burn rate and capital runway, given the $15 million raised and the evident high R&D cost of full-stack hardware/software development?
- Can the company immediately provide signed customer contracts or verifiable revenue figures to substantiate the 11,000 robot letters of intent (LOI) and active pilot claims?
The model is trained using two mechanisms:
First, a prompt/context window where you define your investment criteria directly. Your sector focus. Your green and red flags. You assign weights based on what actually matters to your fund.
For instance, this was used for RoboForce, but this could be made much more comprehensive:
## Weights
1) Team–Market Fit (40%)
2) Traction vs. Claims (30%)
3) Verification Score (30%)
## Tier Thresholds
- Priority: score >= 75
- Standard: score 50–74
- Pass: score < 50
# Here are some examples of startup scores that I have made previously:
{{ JSON.stringify($json.examples, null, 2) }}
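The weights and tier thresholds in the prompt above translate directly into scoring logic. A minimal sketch, where the subscore keys are my own shorthand for the three criteria:

```python
# Fund-specific weights, mirroring the prompt: Team-Market Fit 40%,
# Traction vs. Claims 30%, Verification Score 30%.
WEIGHTS = {
    "team_market_fit": 0.40,
    "traction_vs_claims": 0.30,
    "verification": 0.30,
}


def score_deck(subscores: dict) -> float:
    """Blend 0-100 subscores into one weighted score."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)


def tier(score: float) -> str:
    """Map a blended score to the tier thresholds from the prompt."""
    if score >= 75:
        return "Priority"
    if score >= 50:
        return "Standard"
    return "Pass"
```

For example, subscores of 90 / 60 / 70 blend to 75, just clearing the Priority threshold — which is the point of making the weights explicit: you can see exactly which criterion pushed a deal over the line.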
Second, a vector database trained on your portfolio companies and the scores you’d retroactively assign them. Did successful and unsuccessful investments have common themes? What was the industry? Was the founder first-time or not? This “trains” the system on pattern recognition specific to your fund’s actual history.
In practice, the vector database performs a semantic search across your entire deal history, surfacing 'lookalike' companies to see how this new opportunity aligns with your previous investment decisions.
Below is a view of the sample vector database used for our run.
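The lookalike retrieval can be sketched with plain cosine similarity over embeddings. This is a bare-bones illustration: a production system would use an embedding model and a vector store (e.g. Pinecone or pgvector) rather than raw lists, and `lookalikes` is a hypothetical name.

```python
import math


def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def lookalikes(query_vec: list, portfolio: list, k: int = 3) -> list:
    """Return the k most similar past deals.

    `portfolio` is a list of (company, embedding, retro_score) tuples --
    the retroactive scores you assigned to past investments.
    """
    ranked = sorted(portfolio, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return ranked[:k]
```

The retrieved retro-scores are what give the LLM its fund-specific context: a new deck that embeds close to your past winners reads very differently from one that embeds close to your passes.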
Every deck gets a score (0-100) and a tier:
VCs receive a Slack notification, while top-tier startup applicants get instant emails or SMS messages with a Calendly, Zoom, or Google Meet link. Lower-scoring applicants instead receive a personalized thank-you email.
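The tier-based routing is simple branching logic. A sketch with hypothetical action strings (a real workflow would call the Slack, email/SMS, and scheduling APIs):

```python
def route(company: str, tier_label: str) -> list:
    """Return the notification actions for a scored submission."""
    if tier_label == "Priority":
        # Top tier: alert partners and fast-track the founder to a call.
        return [
            f"slack: new Priority deal -- {company}",
            f"email {company} founder: booking link (Calendly/Zoom/Meet)",
        ]
    # Standard and Pass tiers: close the loop with a personalized note.
    return [f"email {company} founder: personalized thank-you"]
```

The asymmetry is deliberate: partners only ever see Priority alerts, while every founder gets a same-day response regardless of tier.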
Here’s what shifts when this system is running:
1. Response time drops from weeks to hours
Founders get a reply the same day they submit. Even if it’s “not a fit right now,” they know and may even get automatic feedback. This alone changes how founders perceive your fund.
2. Partners stop drowning in noise
If you’re only seeing Tier 1 alerts, you’re not wading through 200 decks to find the 3 that matter. You’re seeing the 3 and can go deep immediately.
3. You stop losing deals to lag
The “we’ll circle back” problem goes away. If it’s Tier 1, you know within an hour. If the founder is raising on a tight timeline, you’re not finding out two weeks later when the round is closed.
4. Pattern recognition improves
Because everything is logged consistently, you start seeing patterns. Which sources send the best deals (e.g. if you have multiple forms). Which sectors are overrepresented in your inbound. Where your thesis is too narrow or too broad.
5. Junior team members become more effective
Instead of “read these 30 decks and tell me which are interesting,” it’s “here are the 5 Tier 1s from this week, do deep research on these specific questions.”
The funds implementing AI for triage will compound their advantages: faster response times → better deal access → better returns → better brand → more inbound → better LP terms.
You don’t need eight figures and a dedicated AI team to build this anymore. Generative AI changed the equation. You need the right architecture and someone who knows how to configure it for your thesis.
\
Here's what the numbers look like for a mid-sized fund processing 2,000 decks per year (roughly 500 per quarter):
Direct Cost Savings
Decks/year: 2,000
Total triage cycle saved: 10 minutes/deck (Skim + Research + CRM Entry)
Blended hourly cost (90% Associate @ $156/hr, 10% Principal @ $233/hr): $164/hr
==~$55K/year in direct labor savings==
Opportunity Cost Unlocked
~333 senior hours freed annually
Additional high-quality founder interactions enabled: ~166 per year
Conservative conversion to investment: ~2%
Incremental investments enabled: ~3 per year
Expected value per investment: $250K–$500K
==Implied expected opportunity value: ~$0.8M–$1.6M==
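These figures reproduce with simple arithmetic. One assumption I'm making explicit here, since the article's numbers imply it: roughly two senior hours per additional founder interaction, which is what links the ~333 freed hours to the ~166 interactions.

```python
# Direct cost savings for a 2,000-deck/year fund.
decks = 2000
minutes_saved_per_deck = 10                       # skim + research + CRM entry
hours_saved = decks * minutes_saved_per_deck / 60  # ~333 hours/year

blended_rate = 0.9 * 156 + 0.1 * 233               # 90% Associate, 10% Principal -> ~$164/hr
labor_savings = hours_saved * blended_rate         # ~$55K/year

# Opportunity cost unlocked.
interactions = hours_saved / 2        # assumed ~2 senior hours per founder interaction
investments = interactions * 0.02     # 2% conversion -> ~3/year
ev_low = investments * 250_000        # ~$0.8M
ev_high = investments * 500_000       # ~$1.7M
```

Nothing here is exotic; the point of writing it out is that the labor line is small and easy to defend, while the opportunity line dominates even under conservative conversion assumptions.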
The labor cost is intuitive, but the opportunity cost is where it gets brutal.
\
This architecture is a foundation, not a finish line. The same logic of quantifying the qualitative can be extended across the entire lifecycle of a fund:
The goal of these extensions is simple: to extract the success patterns hidden in your fund’s unstructured data.
\
The goal isn’t to eliminate human judgment. It’s to make sure human judgment is spent on deals where it actually matters.
If you want more content like this, I created a new Substack for tactical tips for investment professionals. I want to keep The Internet Economy more strategic, while Automated Alpha stays more tactical.
\
I don’t believe in one-size-fits-all SaaS for venture capital. Every fund has a unique thesis, specific green flags, and a distinct culture. A generic tool cannot capture the nuance of your investment committee.
Instead, I work with a small number of funds to build proprietary intelligence layers that they own forever. My work falls into two categories:
I prioritize quality over volume. I only engage with 1 or 2 new funds each month to ensure the systems I build are robust and truly aligned with the GP’s vision.
If you are ready to modernize your triage process or want to discuss a broader AI roadmap for your firm, let’s talk.
You can reach me directly at [email protected] or DM me on LinkedIn.
\