2025-01-30 19:00:10
2 Background and Motivation
2.1 Decentralized Computing Infrastructure and Cloud
4 Detailed Design of DeFaaS and 4.1 Decentralized Scheduling and Load Balancing
4.2 Decentralized Event Distribution
4.3 API Registration and Access Control
4.5 Logging and Billing and 4.6 Trust Management
4.7 Supporting Multi-Cloud Service Mesh
5 Implementation and Evaluation
Our platform enables decentralized logging for services in multi-cloud data centers. For performance reasons, logging is performed off-chain rather than on-chain, and a decentralized logging infrastructure is preferred. Fortunately, IPFS [Benet(2014)] has been proposed and evaluated in the past as a decentralized logging infrastructure[3].
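As an illustration of the encrypt-then-store flow described above, the following sketch serializes a log entry, encrypts it, and stores it under a content hash. The in-memory dictionary, the toy SHA-256 keystream, and all names here are hypothetical stand-ins: a real deployment would use an authenticated cipher such as AES-GCM and an actual IPFS client rather than this illustration.

```python
import hashlib
import json

# In-memory stand-in for an IPFS node (hypothetical; a real system
# would talk to an IPFS daemon and get back a proper CID).
ipfs_store = {}

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter keystream; stands in for AES-GCM in this sketch."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def store_log(key: bytes, entry: dict) -> str:
    """Encrypt a log entry before storing; returns a CID-like content hash."""
    ciphertext = xor_cipher(key, json.dumps(entry, sort_keys=True).encode())
    cid = hashlib.sha256(ciphertext).hexdigest()
    ipfs_store[cid] = ciphertext
    return cid

def load_log(key: bytes, cid: str) -> dict:
    # XOR stream cipher: applying the same operation decrypts.
    plaintext = xor_cipher(key, ipfs_store[cid])
    return json.loads(plaintext)

key = b"gateway-node-shared-secret"
cid = store_log(key, {"user": "0xabc", "api": "/v1/run", "status": 200})
assert load_log(key, cid)["status"] == 200
```

Because the stored object is addressed by the hash of its ciphertext, anyone can verify log integrity without being able to read the encrypted contents.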
\ The API gateway nodes and API end-point providers may bill the users. For billing, the gateway nodes can track the number of requests from each user, the success status and response of each request (including function executions triggered by events), and the involved end-point providers. Later, the gateway nodes bill the users by sending requests and receipts to the management blockchain. If an end-point is offered as a service, the gateway nodes can bill users on behalf of the end-point providers.
\ For performance reasons, billing is not performed at the per-request level. The gateway nodes accumulate statistics and bill the users when the amount reaches a certain threshold or when the elapsed time reaches a billing cycle (e.g., a month). Billing is therefore asynchronous. This design significantly reduces overhead compared with the alternative of an on-chain payment transaction for each API call. Users need to provide a wallet address and/or make a deposit to the management blockchain. For instance, a user may first deposit USDC to a designated smart contract for the next billing cycle[4]. The gateway nodes keep a cache of each user's balance. To avoid negative balances, gateway nodes can use a watermark value: when the accumulated bill reaches the watermark, the node sends a billing receipt to the management chain.
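The watermark-triggered accumulation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the deposit and watermark values, and the `submit_receipt` callback (standing in for an on-chain transaction) are all assumptions for the example.

```python
# Hypothetical sketch of asynchronous, watermark-triggered billing
# at a gateway node. One on-chain receipt settles many API calls.

class BillingAccumulator:
    def __init__(self, user, deposit, watermark, submit_receipt):
        self.user = user
        self.balance = deposit          # cached view of the user's deposit
        self.accumulated = 0            # charges not yet settled on-chain
        self.watermark = watermark      # settle before balance can go negative
        self.submit_receipt = submit_receipt

    def charge(self, amount):
        # Reject requests that would exceed the cached deposit.
        if self.accumulated + amount > self.balance:
            raise RuntimeError("insufficient deposit; reject request")
        self.accumulated += amount
        if self.accumulated >= self.watermark:
            self.settle()

    def settle(self):
        # A single on-chain billing receipt covers the accumulated calls.
        self.submit_receipt(self.user, self.accumulated)
        self.balance -= self.accumulated
        self.accumulated = 0

receipts = []
acc = BillingAccumulator("0xabc", deposit=100, watermark=10,
                         submit_receipt=lambda u, amt: receipts.append((u, amt)))
for _ in range(12):
    acc.charge(1)  # twelve 1-unit API calls
assert receipts == [("0xabc", 10)]   # settled once, at the watermark
assert acc.accumulated == 2          # remainder awaits the next trigger
```

The key property is that on-chain cost scales with the number of receipts, not the number of API calls.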
\ On the other hand, to prevent a gateway node from overcharging users, the system requires the gateway node to keep a copy of processed requests. Each request includes a nonce, and the user creates a digital signature over the request. As a result, the gateway node cannot fabricate requests, and the user cannot repudiate a submitted request.
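A sketch of this nonce-plus-signature scheme is below. To keep the example self-contained it uses HMAC as a stand-in for the asymmetric signature (e.g., the user's wallet key under ECDSA or Ed25519) that actual non-repudiation requires; the class and field names are hypothetical.

```python
import hashlib
import hmac
import json

def sign_request(user_key: bytes, payload: dict, nonce: int) -> dict:
    """User signs (payload, nonce); HMAC stands in for a wallet signature."""
    body = {"payload": payload, "nonce": nonce}
    mac = hmac.new(user_key, json.dumps(body, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {**body, "sig": mac}

class Gateway:
    def __init__(self):
        self.processed = {}   # nonce -> stored request (evidence for disputes)

    def handle(self, user_key: bytes, request: dict) -> bool:
        body = {"payload": request["payload"], "nonce": request["nonce"]}
        expected = hmac.new(user_key, json.dumps(body, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        # Reject forged signatures and replayed nonces.
        if not hmac.compare_digest(expected, request["sig"]):
            return False
        if request["nonce"] in self.processed:
            return False
        self.processed[request["nonce"]] = request
        return True

key = b"user-wallet-secret"
gw = Gateway()
req = sign_request(key, {"api": "/v1/run"}, nonce=1)
assert gw.handle(key, req) is True
assert gw.handle(key, req) is False                  # replayed nonce rejected
forged = {**req, "payload": {"api": "/v1/other"}}
assert gw.handle(key, forged) is False               # tampered request rejected
```

The stored, signed copies give both sides what they need in a dispute: the gateway can prove the user submitted each billed request, and the user can detect any request the gateway cannot back with a valid signature.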
Trust management is necessary for a decentralized system. The main entities in the system include cloud providers, API gateway nodes, FaaS end-point providers, blockchain nodes, and users. As described earlier, cloud providers are somewhat trusted. Note that in our current design, FaaS mainly targets off-chain applications and off-chain processing. The management blockchain (Besu) can be Proof-of-Stake based. Trust management for other entities, such as gateway nodes and end-point providers, can be stake based, reputation based, or a hybrid of the two. For instance, staking can be required for gateway nodes, and entities that deviate from the protocol or act maliciously can be penalized (e.g., through slashing). To minimize the impact on performance, a dispute-resolution process can be applied. The system stores all API calls and responses in decentralized logs. When a dispute arises, a user can raise a claim against the involved gateway node or end-point service provider, and on-chain governance can then resolve the dispute. Similar mechanisms can be found in many blockchain projects, such as blockchain insurance and on-chain governance of protocol participants.
The concept of a service mesh [Li et al.(2019)] is an extension of microservices. With a service mesh, a group of microservices is connected to form a processing pipeline. Existing designs and implementations of service meshes adopt the idea of software-defined networking (SDN) to separate common functions from service-specific functions. This separation greatly improves microservices from various perspectives, such as efficient management and flexible deployment. However, existing service mesh architectures do not support multi-cloud deployments, which greatly limits the use and adoption of service meshes when an application relies on microservices that belong to multiple cloud service providers.
\ DeFaaS provides all the essential functions to support a service mesh across multiple cloud service providers. From an architectural perspective, the DeFaaS management blockchain serves as the coordinator of multiple control planes. These control planes are managed by different cloud service providers, and each manages its own service mesh. When a request is sent to a service provider whose control plane cannot resolve the required information, the control plane can query the management blockchain.
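The local-first resolution with a blockchain fallback can be sketched as below. The class names, service names, and endpoints are hypothetical; the `ManagementChain` dictionary stands in for the on-chain registry.

```python
# Hypothetical sketch of cross-cloud service resolution: each provider's
# control plane resolves local services itself and falls back to the
# DeFaaS management blockchain for services registered by other providers.

class ManagementChain:
    """Stand-in for the on-chain global service registry."""
    def __init__(self):
        self.registry = {}

    def register(self, service, endpoint):
        self.registry[service] = endpoint

    def lookup(self, service):
        return self.registry.get(service)

class ControlPlane:
    def __init__(self, provider, local_services, chain):
        self.provider = provider
        self.local = local_services   # service name -> endpoint
        self.chain = chain

    def resolve(self, service):
        if service in self.local:
            return self.local[service]
        # Unknown locally: query the management blockchain.
        return self.chain.lookup(service)

chain = ManagementChain()
chain.register("image-resize", "https://cloud-b.example/fn/resize")

plane_a = ControlPlane("cloud-a",
                       {"thumbnail": "https://cloud-a.example/fn/thumb"},
                       chain)
assert plane_a.resolve("thumbnail") == "https://cloud-a.example/fn/thumb"
assert plane_a.resolve("image-resize") == "https://cloud-b.example/fn/resize"
```

Only cross-provider lookups touch the blockchain, so intra-mesh traffic keeps the performance characteristics of a conventional single-cloud service mesh.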
\
:::info Authors:
(1) Rabimba Karanjai, Department of Computer Science, University of Houston ([email protected]);
(2) Lei Xu, Department of Computer Science, Kent State University;
(3) Lin Chen, Department of Computer Science, Texas Tech University;
(4) Nour Diallo, Department of Computer Science, University Of Houston;
(5) Weidong Shi, Department of Computer Science, University Of Houston.
:::
:::info This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.
:::
[3] To protect confidentiality, logs are encrypted when stored over IPFS.
\ [4] Note that this step may involve cross-chain transactions.
2025-01-30 19:00:07
\
DARPA wants AI to help anticipate future money laundering and other illicit financial activities before they even happen.
\ With the proposed Anticipatory and Adaptive Anti-Money Laundering (A3ML) research program, the US Defense Advanced Research Projects Agency (DARPA) is looking to eliminate global money laundering entirely, according to the A3ML special notice.
\
“A3ML aims to develop algorithms to sift through financial transactions graphs for suspicious patterns, learn new patterns to anticipate future activities, and develop techniques to represent patterns of illicit financial behavior”
DARPA, A3ML Special Notice, December 2024
\ “DARPA seeks to revolutionize the practice of anti-money laundering through its A3ML program,” the program description reads.
“A3ML aims to develop algorithms to sift through financial transactions graphs for suspicious patterns, learn new patterns to anticipate future activities, and develop techniques to represent patterns of illicit financial behavior in a concise, machine-readable format that is also easily understood by human analysts.
\ “The program’s success hinges on algorithms’ ability to learn a precise representation of how bad actors move money around the world without sharing sensitive data.”
“DARPA wants to eliminate global money laundering by replacing the current manual, reactive, and expensive analytic practices with agile, algorithmic methods”
DARPA, A3ML Special Notice, December 2024
\ The A3ML special notice highlights a federal indictment alleging that Chinese underground banking launders money for the Sinaloa cartel and that half of North Korea’s nuclear program is financed by laundered funds.
\ DARPA makes no mention of digital currencies or cryptocurrencies; however, the money laundering sources cited in the special notice do.
For example, the document cites a Congressional Research Service report that has a section dedicated to the question of “Cryptocurrency Regulation,” which states:
\ “Whether (or to what extent) the digital asset industry requires enhanced AML regulation and how certain financial technology (fintech) falls within the scope of US sanctions were issues contemplated by the 118th Congress and may be further addressed in the 119th Congress […]
“The [Congressional] bills also raised complex policy questions regarding the desired scope of AML regulations for virtual asset service providers, anonymizing services, virtual currency or digital asset kiosks, and other decentralized service providers, such as un-hosted wallet providers, digital asset mixers, miners, validators, and other nodes in the cryptocurrency-related ecosystem.”
\
“if successful, A3ML would make it prohibitively expensive for our adversaries to transfer illicit value through the global financial system. the technical hypothesis: illicit finance tactics, techniques, and procedures can be algorithmically extracted from diverse data sources and represented in a generic, sharable form”
DARPA Program Manager David Dewhurst, A3ML, December 2024
\
Leading the proposed A3ML program is David Dewhurst, a fan of beginning sentences with lowercase letters, who joined DARPA as a program manager in April 2024 but worked on DARPA projects in various roles during his almost three-year tenure at Charles River Analytics, according to his LinkedIn profile.
\ In a recent LinkedIn post, Dewhurst mentioned that the goal was “to stop adversaries from laundering money to evade sanctions, buy weapons, and fund drugs that kill Americans,” and that the technical hypothesis for A3ML was that “illicit finance tactics, techniques, and procedures can be algorithmically extracted from diverse data sources and represented in a generic sharable form.”
\ Do these illicit finance tactics, techniques, and procedures include the use of digital assets and cryptocurrencies?
\
“we can align incentives — lower compliance costs and risks for industry, increase accuracy and precision for usg, massively increase privacy for ordinary Americans — with the right technology”
DARPA Program Manager David Dewhurst, A3ML, December 2024
\ If and when the A3ML program is launched and is successful, will the algorithms remain in the hands of the Pentagon, or will they also be shared with allies or become commercially available?
\ Apart from the immense good that can come from eliminating money laundering that funds the killing of Americans, could the tools and tactics developed contribute to creating precrime algorithms that could one day be weaponized against American citizens?
What could this mean for law-abiding crypto enthusiasts and/or Bitcoin holders in the future?
\ DARPA intends to hold an industry day for the A3ML program in the coming weeks.
\
:::info Tim Hinchliffe, Editor, The Sociable
:::
\
2025-01-30 18:50:08
\ The supply chain is only as strong as its weakest link. For logistics companies operating in an ever more complicated cybersecurity and technological environment, that weakest link is third-party partners.
\ A recent report from Hexnode surveyed 1,000 IT professionals across small and mid-sized supply chain organizations and revealed a deeply concerning trend. Over half (52%) of the organizations encountered cybersecurity incidents stemming from third-party vendors on at least one occasion.
\ Threat actors are exploiting this weak link by striking trusted partners to infiltrate their true targets. As a result, hackers bypass traditional defenses and get in the back door to wreak some real damage – disrupting operations, compromising data, and damaging reputations.
\ Strong third-party risk management (TPRM) programs are the solution, yet the report alarmingly found that 15% of businesses bypass this critical process altogether. Let’s explore how logistics operators can safely leverage the expertise of third-party partners while protecting themselves from what’s increasingly the supply chain’s ignored vulnerability.
Supply chains are built upon intricate networks of relationships between organizations and their third-party service providers. However, malicious actors know this and are more often trying to infiltrate the target organization by exploiting a trusted component or software within the supply chain, thereby circumventing traditional security measures and catching victims off guard.
\ The Okta breach in 2023 is a glaring example of third-party risk. In this instance, a hacking group executed a supply chain attack targeting Okta’s customers rather than Okta itself, exposing several financial institutions, including Western Union, Ally, and Amalgamated Bank, to potential threats.
\ In the face of constantly evolving tactics, organizations must remain vigilant against threat actors. How? By giving far more budget and attention to the programs overseeing these third-party relationships.
In light of this threat, companies have no choice but to strengthen their TPRM programs. This requires analyzing the risks posed by working with outside services, engaging with vendors to assess their security posture, and remediating any identified weaknesses. And, if push comes to shove, companies need to delay deployment until the identified security issues are addressed.
\ Risk tolerance, vendor criticality, and compliance requirements should then guide organizations on whether it’s safe to onboard the vendor or find alternative solutions. If companies decide to proceed with the partnership, it’s important to keep an eye on their security and compliance with regular checkups. After all, these partners now have access to internal systems and sensitive data to deliver their services (the exact information hackers are targeting).
\ Alarmingly, more than 15% of businesses bypass this process and don’t look into how or if partners protect data. This just isn’t good enough. In this day and age, with known risks and increasing cyberattacks, the buck stops with logistics companies. It’s up to them to define their third-party risk tolerance, ensure a reliable method for handling such risks, and create a system for continually assessing and monitoring the security of the partnership.
Of course, this isn't to say third-party platforms and partners are without value. They can be important external resources that take the pressure off internal teams with time-consuming or technical tasks. But, and it's worth repeating, these partnerships should be entered into with a healthy dose of caution.
\ Therefore, logistics can no longer afford to treat third-party cybersecurity as an afterthought. Increasingly, it's just as important as internal defenses. This demands investing in better internal and external security as well as better partner vetting. If companies aren't up to standard, don't take the risk. It's that simple.
\ Additionally, train your staff with regular security seminars and workshops. Employees are the first line of defense and can be your eyes and ears on the ground. If there's something wrong on the backend, or partner profiles are acting strangely, they can see and say something. Help them help you.
\ The takeaway here is to take matters into your own hands. Instead of waiting for clients or customers to discover breaches, supply chain companies today must arm themselves with the necessary tools and training to detect and respond immediately. Only by championing a holistic, vigilant approach can supply chain companies weed out poor partners and protect themselves from this underappreciated threat.
2025-01-30 18:39:17
It’s early 2025, and we may already be witnessing a redefining moment for AI as we’ve come to know it in the last couple of years. Is the canon of “more GPUs is all you need” about to change?
\ What an unusual turn of events. First, the Stargate Project. The joint venture created by OpenAI, SoftBank, Oracle, and investment firm MGX is aiming to invest up to US$500 billion in AI infrastructure in the United States by 2029.
\ Arm, Microsoft, Nvidia, Oracle, and OpenAI are the key initial technology partners in what has been dubbed “the Manhattan project of the 21st century”, with direct support from the US administration. President Donald Trump called it “the largest AI infrastructure project in history”.
\ The list of leading US-based technology partners in the project and the vast investment in what has been a strategic initiative for the US – AI infrastructure to secure leadership in AI – is what draws the parallel to the Manhattan Project.
\ Both AI chip makers in the list – Arm and Nvidia – are led by CEOs of Taiwanese origins. That is notable, considering Taiwan’s ongoing tense relations with China, and the fact that the Stargate Project is the latest in a lineage of recent US policies aiming to invigorate domestic AI infrastructure and know-how while imposing limitations to the rest of the world, primarily China.
\ However, none of that mattered for the market, which sent Nvidia’s stock soaring yet again at the announcement of the Stargate Project. But that was all before the release of DeepSeek R1.
\ DeepSeek R1 is a new open-source reasoning model, released just days after the announcement of the Stargate Project. The model was developed by the Chinese AI startup DeepSeek, which claims that R1 matches or even surpasses OpenAI’s ChatGPT o1 on multiple key benchmarks but operates at a fraction of the cost.
\ What is remarkable about DeepSeek R1 is that it has been developed in China, despite all the restrictions on AI chips meant to hamper the ability to make progress on AI. Does that mean that the OpenAI- and US-centric conventional wisdom of “more GPUs is all you need” in AI is about to be upended?
Truth is, when we arranged a conversation on AI chips with Chris Kachris a few days ago, neither the Stargate Project nor DeepSeek R1 had burst onto the AI scene. Even though we did not consciously anticipate these developments, we knew AI chips is a topic that deserves attention, and Kachris is an insider.
\ It’s become somewhat of a tradition for Orchestrate all the Things to analyze AI chips and host insights from experts in the field, and the conversation with Kachris is the latest piece in this tradition.
\ Chris Kachris is the founder and CEO of InAccel, which helps companies speed up their applications using hardware accelerators in the cloud more easily than ever. He is also a widely cited researcher with more than 20 years of experience in FPGAs and hardware accelerators for machine learning, network processing, and data processing.
https://www.youtube.com/watch?v=BkYhBOFqkwY&embedable=true
After InAccel was recently acquired by Intel, Kachris went back to research, currently working as an Assistant Professor in the Department of Electrical and Electronics Engineering at the University of West Attica.
\ When setting the scene for the conversation with this timely news, Kachris’ opening remark was that innovation in AI chips is an “expensive sport”, which is why it mostly happens in industry rather than academia. At the same time, he noted that the resources needed do not come down only to money; they also entail talent and engineering.
\ For Kachris, US policies have been on the right track in their aim to repatriate expertise and make the country self-sufficient. As a European citizen, he also called for the EU to apply similar initiatives, joining many voices calling for the EU to step up its GPU game. Would looking at how DeepSeek’s success was achieved, however, have anything to teach us?
According to the “Generative AI in the BRICS+ Countries” report, unlike other BRICS countries, China uses both foreign graphics cards (via the cloud and in its own data centers) and local cards made by Chinese companies.
\ Currently, there are more than 10 companies in China that are developing their own graphics cards, and the process of switching to local GPUs after using NVIDIA is reportedly not difficult for Chinese companies.
\ It seems like in order to stay competitive in the AI race, nations will have to reconsider their options, potentially borrowing pages from China’s playbook. Kachris concurred that China has been progressing in leaps and bounds, first imitating and then developing innovative techniques of its own.
\
“They can mix and match. They can combine different versions of GPUs and other processing units in order to create a powerful data center or cloud. This is very useful, especially if you think that in the past, you had to buy new equipment every three or four years maybe.
\ Now the innovation is so fast that almost every year, you have more and more powerful chips and more powerful processors. Does it make sense to throw away processors that are one or two years old? So definitely, you need to find a way to utilize resources, even if it is heterogeneous resources. This would be much more cost efficient”, said Kachris.
\ DeepSeek R1’s reported training cost is a strong argument in support of this approach. In addition to training on heterogeneous infrastructure, DeepSeek’s approach included reducing numerical precision, multi-token prediction, and an intelligent Mixture of Experts technique.
\ The result is slashing training costs from $100 million to around $5 million and reducing hardware needs from 100,000 GPUs to a mere 2,000, making AI development accessible on standard gaming GPUs. What’s more, even if DeepSeek is not 100% open source – whatever that means for LLMs – its process can be replicated.
\
\
The immediate reaction to the news was a selloff, with Nvidia’s stock going down 17%. The market has already started course-correcting at the time of writing, with both the downward and upward trends being somewhat predictable.
\ On the one hand, what DeepSeek demonstrated was that there is lots of room for efficiency gains in training top-performing AI models, actively undermining the conventional wisdom. On the other hand, that doesn’t mean that Nvidia isn’t still the leader, and we can expect to see the Jevons paradox in action once again.
\ Nvidia kept the pace of innovation in 2024, announcing and subsequently shipping its latest Blackwell architecture, expanding its ecosystem and hitting multiple financial and business milestones. Kachris highlighted that Nvidia is not just selling chips anymore, but they’ve moved towards vertical integration of their NVLink technology with their chips on the DGX platform.
\ But Nvidia GPUs are not the only game in town. AMD on its part announced a new AI accelerator, the Instinct MI325X. As Kachris noted, the MI300 series is very powerful, featuring specialized units to accelerate transformers – a key architecture for Large Language Models. AMD’s growth is purportedly driven by data center and AI products.
\ The vast majority of people and organizations will be AI users, not AI builders. For them, using or even building AI applications is not really a matter of training their own model, but rather using or fine-tuning a pre-trained model.
\ Kachris also called out Intel’s progress with Gaudi. Despite the high performance capabilities of Gaudi 3, however, Intel seems to be behind in terms of market share, largely due to software. At the same time, Intel is making moves to sell its FPGA unit, Altera.
\ FPGAs, Kachris maintains, may not be the most performant solution for AI training, but they make lots of sense for inference, and this is where there is ample room for competition and innovation. It’s precisely this – building a software layer to work with FPGAs – that InAccel was working on, and what led to the acquisition by Intel.
\ Naturally, Kachris emphasized the importance of the software layer. At the end of the day, even if a chip has superior performance, if it’s not easy to use for developers via the software layer, that is going to hinder adoption. Nvidia maintains the significant advantage on the software layer due to its ubiquitous CUDA stack, which it keeps investing in.
\ The rest of the industry, led by Intel via the UXL Foundation / OneAPI initiative, is making efforts to catch up. AMD has its own software layer – ROCm. But catching up is not going to happen overnight. As Kachris put it, the software layer has to enable using the hardware layer without changing a single line of code.
\ Nvidia is ramping up its inference and software strategy too with its newly released NIM framework, which seems to have gained some adoption. The competition is also focusing on inference. There’s a range of challengers such as Groq, Tenstorrent, GraphCore, Cerebras and SambaNova, vying for a piece of the inference market pie.
While DeepSeek is a prominent display of the benefits of optimization, it’s also not the only one. Kachris was involved in a recent comprehensive survey and comparison of hardware acceleration of LLMs, with many of those geared towards inference.
\ One way to go about it is to do this via AI provider APIs – typically OpenAI or Anthropic. For more sophisticated use cases, however, for reasons having to do with privacy, compliance, competitive advantage, application requirements or cost, end users will want to deploy AI models on their own infrastructure.
That may include a whole range of environments, ranging from on premise and private cloud to edge and bare metal. Especially with LLMs, there is even the option to run them locally on off the shelf machines. We asked Kachris whether he believes that local / edge deployment of LLMs makes sense.
\ Kachris noted that inference may work with “shrinked”, aka quantized, versions of AI models. Research suggests that even 1-bit versions of models are viable. Kachris pointed out that even though there are specialized hardware architectures, among the broadly available options GPUs and FPGAs provide the best performance, with FPGAs being more energy efficient.
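To make the quantization idea concrete, here is a minimal sketch of symmetric int8 post-training quantization: float32 weights are mapped onto the integer range [-127, 127] and stored at a quarter of the memory, at the cost of a bounded rounding error. The weight values are made up for illustration; this is the generic technique, not DeepSeek's or any vendor's actual scheme.

```python
# Symmetric int8 quantization sketch: scale is chosen so the largest
# weight maps to 127; dequantization recovers each weight to within
# half a quantization step.

weights = [0.12, -0.5, 0.33, 0.91, -0.07]        # toy float32 weights
scale = max(abs(w) for w in weights) / 127        # one step in float units
quantized = [round(w / scale) for w in weights]   # what gets stored (int8)
dequantized = [q * scale for q in quantized]      # used at inference time

assert all(-128 <= q <= 127 for q in quantized)
assert all(abs(w - d) <= scale for w, d in zip(weights, dequantized))
```

More aggressive schemes push below 8 bits, down to the 1-bit models mentioned above, trading more reconstruction error for further memory and bandwidth savings.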
As far as future developments go, Kachris highlighted in-memory computing as an area to keep an eye on. The main idea is being able to combine storage and compute on the same unit, thus eliminating the need for data transfer and leading to better performance. That is inspired by the way biological neural networks work, and is referred to as neuromorphic computing.
\ There are more areas of noteworthy developments, such as chiplets, specialized chips tailored for the transformer architecture that powers LLMs, photonic technology and new programming languages for AI.
\ In terms of more short to mid term prospects, and the question of whether there is room for innovation in an Nvidia-dominated world, Kachris believes that embedded systems and Edge AI represent an opportunity for challengers:
\
“There are different requirements and different specifications in the domain of Edge AI. I think there is room for innovation in Edge AI, for example in video applications for hospitals, or autonomous driving and aviation.
\
I think it’s going to happen. Let’s talk about GPUs. So NVIDIA is the leader in GPUs, but there was a lack of GPUs for wearable devices. And we saw a great company, Think Silicon, stepping up and developing a GPU that is specialized for fit bands or smartwatches, and then being acquired by Applied Materials.
\ Innovation is going to happen in areas that are too small for companies like Nvidia or Intel, but good enough for smaller companies that can make specialized products”.
\
:::info
Stories about how Technology, Data, AI and Media flow into each other shaping our lives.
Analysis, Essays, Interviews and News. Mid-to-long form, 1-3 times per month.
:::
\
2025-01-30 18:21:53
As we head into 2025, many developers find that rich text editors (RTEs) are no longer just small features tucked into larger products. Instead, these editors have become important parts of content management systems, CRMs, productivity platforms, and e-learning solutions.
\ A few years ago, basic text formatting was enough. Now, developers want more customization, better performance, stronger scalability, and even advanced features like AI-driven help. Recent data from CKEditor’s “2024 State of Collaborative Editing” and TinyMCE’s “2024 RTE Survey” shows these new priorities very clearly.
To start, many developers now need rich text editors that fit their unique needs. According to TinyMCE’s 2024 RTE Survey, 52% of developers want full control over their editor’s experience. This number is important because it shows that most developers do not want a “one-size-fits-all” editor. Instead, they want to shape the editor to match their application’s look, feel, and workflow.
\ Some ways developers achieve this include:
Adding or removing toolbar buttons to match the project’s brand and style.
Using flexible APIs to create custom plugins or special formatting features.
Including revision histories or other custom workflows that feel natural for their teams.
\
By having this level of control, developers can make sure their editors feel like a true part of their product, not just a separate tool.
Next, performance has become a top priority. In TinyMCE’s 2024 RTE Survey, 79% of respondents said performance is the most critical factor. Today’s users have high standards. If an editor loads slowly or feels sluggish, they might lose focus or trust.
\ By paying attention to performance, developers can offer a better user experience. They can make sure that, as soon as a user opens the editor, it feels responsive and stable.
As we move forward, many products must handle larger and more spread-out user bases. According to the TinyMCE 2024 RTE Survey, 43% of developers prioritize scalability. This makes sense because many apps now serve global teams and large groups of users, sometimes all at once. Editors must handle:
\
Many users editing documents at the same time.
Real-time changes appearing smoothly for everyone.
Growing workloads as the product becomes more popular.
\
By choosing an RTE that can scale without breaking, developers can trust that their editor won’t slow down or fail as more users rely on it. In the end, scalability means fewer headaches when traffic increases or when a project grows in complexity.
Everyone expects basic formatting features like bold, italics, and headings. In fact, TinyMCE’s 2024 RTE Survey shows that 88% of developers consider these core features a given. But basic formatting alone will not set an editor apart anymore.
\ Today, developers are looking for ways to enhance the writing process. Some are interested in AI-driven tools, such as:
\
Predictive text suggestions that help users write faster.
Grammar and spelling checks that improve the quality of content.
Intelligent formatting that adjusts style automatically.
\
With these new features, the editor becomes more than just a text box. It starts to feel like a smart assistant, guiding users and helping them produce better results. This aligns with a finding in Cloudinary’s 2023 State of Visual Media Report: “68% of developers believe that AI’s main benefit is enabling productivity and efficiency.”
\ For instance, Froala integrates with Gemini and other popular generative AI tools to improve readability, SEO, and content quality. Making such features accessible in a rich text editor’s toolbar helps users produce top-notch content in significantly less time. Because of this, the most advanced RTEs have gone beyond basic formatting, making them a core part of modern applications.
In the past, some teams saw RTEs as extras, but that is no longer the case. CKEditor’s “2024 State of Collaborative Editing” report found that 71% of respondents consider RTEs critical to their platforms. This shows a big change.
\ Developers now treat these editors as key building blocks. For example:
\
In a CMS, a well-designed RTE lets marketing teams update content without needing a developer for every small change.
In a productivity suite, the RTE supports collaboration by letting multiple people edit the same document at once, track changes, and comment.
In an e-learning platform, the RTE can help teachers build lessons, quizzes, and discussions that include rich media and advanced formatting.
\
Because these scenarios depend on a reliable editor, choosing the right one is a big decision.
When we think about these trends, it is easy to see them in real products. Many developers remember times when a slow editor frustrated writers or when a lack of custom features forced the team to find odd workarounds. On the other hand, a good, flexible editor can make everyone’s job easier.
For example, a CMS might use a customizable editor that matches a company's brand and makes sure that authors can create content without needing a technical person. A collaboration tool might rely on an RTE that loads fast enough to keep everyone's ideas flowing smoothly. An e-learning platform might use an RTE that handles tables, images, videos, and special formatting to keep students engaged.
Some tools already fit these new needs without shouting about it. For example, an editor like Froala stays light and easy to load while still offering ways to add custom features. Furthermore, it scales nicely and works well with popular languages and frameworks.
According to the 2024 Stack Overflow Developer Survey, JavaScript continues to dominate as the most popular programming language. With 62.3% of developers using JS, 39.8% desiring it, and 58.3% admiring it in 2024, it's here to stay. This highlights the importance of using rich text editors that integrate into diverse technologies, especially widely used ones. Such versatility allows developers to quickly adapt to new requirements, such as changes in tech stacks, promoting scalability.
A lean tool like Froala can help developers meet their goals without slowing them down. Even if it is not the only choice, it represents the kind of editor that developers now look for: something that does not get in the way but instead supports growth and new ideas.
The data from these surveys is hard to ignore. More than half of developers want deep customization. Nearly four-fifths put performance first. Almost half focus on scalability. Also, a large majority expect at least some baseline formatting, and many want even more advanced features, including AI-driven help.
These changes show that choosing the right RTE is more important than ever. Today's developers need an editor that fits into their workflow, loads quickly, scales easily, and offers a path to smarter features. By paying attention to these factors, teams can pick an editor that feels like part of the product's core, not just another add-on.
References
CKEditor. "The 2024 State of Collaborative Editing." CKSource, a Tiugo Technologies Company. https://ckeditor.com/insights/collaboration-survey-report-2024/

TinyMCE. "The 2024 RTE Survey." Tiny Technologies. https://www.tiny.cloud/blog/announcing-the-2024-state-of-rich-text-editors-report

Stack Overflow. "2024 Stack Overflow Developer Survey." https://survey.stackoverflow.co/2024/

Cloudinary. "2023 State of Visual Media Report." https://cloudinary.com/state-of-visual-media-report

Verified Market Reports. "Global Text Editor Market Overview and Trends." Text Editor Market Insights Report. https://www.verifiedmarketreports.com/product/text-editor-market/

Froala. "Why Froala? Key Features and Benefits." https://www.froala.com
One day, you wake up and decide you can't live without TypeScript in your legacy JavaScript project anymore. That's what happened to me and my colleagues, too, and now I want to tell you about our experience.
About the project:

The longer you develop an app, the more complex it becomes. With time, we realized that we couldn't ignore a number of long-standing issues anymore.

Why TypeScript? Besides the obvious benefits, such as handy static typing, a self-documenting codebase, and enhanced code reliability, we found a few advantages for our project specifically.

These benefits were enough to convince the development team to adopt TS, but we still had to justify to the management team why the transition was worth spending money on. The points above didn't explain what management would get from it, but we had arguments for them, too.

As a result, we managed to get approval and started the task!
First, we needed to install TypeScript as a dev dependency:

npm install --save-dev typescript

We also needed a basic tsconfig.json file to make it work inside the project, as recommended:
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "strict": true
  }
}
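Because the codebase was still mostly JavaScript at this point, a couple of extra compiler options are worth knowing about. The following is only a sketch (not our exact file) showing how allowJs lets .js and .ts files coexist during a gradual migration:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "strict": true,
    "allowJs": true,   // compile existing .js files alongside .ts
    "checkJs": false   // don't type-check legacy JS until it's converted
  }
}
```

With allowJs enabled, the compiler accepts the legacy files as-is, and checkJs can be switched on per file later (via // @ts-check) as each one is cleaned up.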
Of course, it wasn't that easy, right? Let's see what issues we had.

Since we're using Webpack in our project, we had to show it how to handle TS files. To do this, we added the ts-loader package:

npm install --save-dev ts-loader

and added parameters to the Webpack config file:
module.exports = {
  module: {
    rules: [{
      test: /\.ts$/,
      exclude: /node_modules/,
      use: 'ts-loader'
    }]
  }
}
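One related setting worth mentioning — this is an assumption about a typical ts-loader setup, not something taken from our config above: a resolve.extensions entry lets modules import each other without spelling out file extensions, which matters once .js files start turning into .ts:

```javascript
// Sketch of an additional Webpack setting (hypothetical; merge into your own config):
module.exports = {
  resolve: {
    // Try .ts first so a converted file takes precedence over its old .js twin
    extensions: ['.ts', '.js']
  }
}
```

Without it, an import like import { foo } from './utils' can break the moment utils.js becomes utils.ts.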
We also had to tell our linter to check TS files, too:
const ESLintPlugin = require('eslint-webpack-plugin');

module.exports = {
  plugins: [
    new ESLintPlugin({
      extensions: ['js', 'ts'],
      files: './*.{js,ts}'
    })
  ]
}
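The plugin only controls which files Webpack hands to the linter; ESLint itself also needs a TypeScript-aware parser. A minimal sketch of the corresponding .eslintrc addition, assuming the @typescript-eslint/parser and @typescript-eslint/eslint-plugin packages are installed:

```json
{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended"
  ]
}
```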
Since we are using jQuery in our project (jQuery is still alive in 2025!), we also needed the type definitions for jQuery:

npm install --save-dev @types/jquery

Voila! TypeScript was now working in our project. But what next? Our project still consisted entirely of JS. How did we transition it to TS?
We couldn't afford to convert all the JS files to TS simultaneously, so we decided to do this gradually. We chose the following strategy:
We used TypeScript for newly created files.
Since the project is still under active development, components are constantly redesigned and refactored, so we regularly create new files.

We converted existing JS to TS when possible.
Whenever we had the rare opportunity to take a pause from developing features, we dedicated it to technical improvements like rewriting entire files in TS.
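To illustrate what such a conversion buys you, here is a small made-up example — the function and its types are invented for this article, not taken from our codebase. The JS original accepted anything; the TS version rejects bad calls at compile time:

```typescript
// Hypothetical converted file. Before, in plain JS, it was something like:
//   function formatPrice(value, currency) { return "$" + value.toFixed(2); }
// A call like formatPrice("12") only blew up at runtime.

interface PriceOptions {
  currency: string;  // ISO 4217 code, e.g. "USD"
  locale?: string;   // defaults to "en-US"
}

function formatPrice(value: number, options: PriceOptions): string {
  return new Intl.NumberFormat(options.locale ?? "en-US", {
    style: "currency",
    currency: options.currency,
  }).format(value);
}

// formatPrice("12", { currency: "USD" }) is now a compile-time error.
console.log(formatPrice(12.5, { currency: "USD" })); // → "$12.50"
```

The interesting part is not the function itself but the failure mode: the class of bug moves from production logs to the editor's red squiggles.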
This is still a work in progress, but we have some results after one year of the transition:

JavaScript to TypeScript ratio in the project
Comparing the number of lines of code for JS and TS inside the project: JS: 9,190 lines (76.5%); TS: 2,812 lines (23.5%). Not bad so far!

Decreased backlog
For the first time in many years, we managed to resolve most of our backlog. This is, of course, the result of several factors, but TypeScript still contributed to the achievement.

The dev team acquired new skills
This process was also quite fun for some of our developers who weren't very familiar with TypeScript. It kept them entertained and motivated to reach new heights in their professional lives!
I collected some thoughts from my teammates on their experience with TS so far, especially from those who hadn't used it or any other type system before.

If you've read the article to this point, this is your sign to put some TypeScript in your legacy project! It's worth it and not that painful after all.