2026-04-20 10:42:37
With the growth of web applications and the need for more scalable and interoperable distributed systems, architectural patterns emerged to organize and standardize communication between systems. In this context, the REST (Representational State Transfer) architecture stands out as one of the most widely used models for building modern APIs³.
REST is not a protocol but a set of principles that guide the development of resource-based systems, using HTTP as the communication protocol¹. Its adoption has become increasingly common due to the simplicity, flexibility, and efficiency it brings to integration between different applications.
This article presents the main concepts of the REST architecture, its fundamental principles, and implementation best practices, highlighting its importance in contemporary software development.
To present the concepts, principles, and best practices of the REST architecture, highlighting its importance in the development of distributed systems.
2.1 The Concept of REST Architecture
The REST architecture was defined by Roy Fielding as an architectural style for distributed systems, based on manipulating resources through HTTP requests. Its main goal is to promote integration between applications, enabling efficient communication between different systems⁴.
REST also emerged as a way to guide the evolution of the Web's own architecture, building on established standards such as HTTP and URIs, which contributes to its wide adoption and its efficiency in service development².
REST is grounded in a set of principles that guide the development of distributed systems. These principles aim to ensure efficiency, scalability, and simplicity in communication between applications, and their viability has been demonstrated in practical studies on implementing REST services⁴.
Client-Server: this principle establishes that client and server must be independent, allowing them to evolve separately. This improves scalability and eases system maintenance³.
Statelessness: in the REST model, each request must contain all the information needed to process it, without depending on previous interactions. This reduces complexity and improves system performance⁵.
Uniform Interface: standardized communication between client and server is essential in REST. This includes the correct use of HTTP methods and status codes, ensuring predictability and consistency³.
Cacheability: REST allows responses to be cached, reducing the need for new requests to the server and improving performance².
Layered System: systems can be organized into layers, each with specific responsibilities, increasing the application's security and scalability².
Correctly applying the REST architecture involves adopting best practices that ensure quality and efficiency in development. Proper use of these principles contributes to building systems that are simpler, more efficient, and aligned with modern Web standards².
Resource naming: endpoints should be clear, objective, and correctly represent the application's resources, making the API easier to understand and use³.
HTTP methods: each HTTP method should be used according to its purpose: GET for retrieval, POST for creation, PUT for updates, and DELETE for removal¹.
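For illustration, this mapping between CRUD operations and HTTP methods can be sketched with Python's standard library; the `api.example.com` base URL and the `/users` resource are hypothetical:

```python
from urllib.request import Request

BASE = "https://api.example.com/v1"  # hypothetical API, for illustration only

# Each CRUD action maps to one HTTP method applied to a resource URI.
operations = [
    ("read",    Request(f"{BASE}/users/42", method="GET")),
    ("create",  Request(f"{BASE}/users", data=b'{"name": "Ana"}', method="POST")),
    ("replace", Request(f"{BASE}/users/42", data=b'{"name": "Ana Silva"}', method="PUT")),
    ("delete",  Request(f"{BASE}/users/42", method="DELETE")),
]

for action, req in operations:
    print(f"{action}: {req.get_method()} {req.full_url}")
```

Note that the resource URI identifies *what* is acted on, while the method expresses *how*: the same `/users/42` URI serves read, replace, and delete.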
Versioning: versioning preserves compatibility across different versions of the API, avoiding impact on systems that are already integrated⁵.
Security: authentication and authorization mechanisms are fundamental to protect data and ensure controlled access to information⁵.
REST has consolidated itself as one of the main standards for developing distributed systems, especially in the context of web APIs. Its principles promote organization, scalability, and ease of maintenance.
In addition, adopting best practices contributes directly to application quality, making systems more secure, efficient, and understandable.
Likewise, knowledge of REST is essential for Software Engineering professionals and is an important differentiator in the development of modern solutions.
[1] DEVMEDIA. Conhecendo o modelo arquitetural REST. Available at: https://www.devmedia.com.br/conhecendo-o-modelo-arquitetural-rest/28052. Accessed: 15 Apr. 2026.
[2] JR., Elemar. Fundamentos para sistemas com arquiteturas REST. Available at: https://elemarjr.com/livros/arquiteturadesoftware/volume-1/fundamentos-para-sistemas-com-arquiteturas-rest/. Accessed: 15 Apr. 2026.
[3] FERREIRA, Rodrigo. REST: Princípios e boas práticas. Available at: https://www.alura.com.br/artigos/rest-principios-e-boas-praticas. Accessed: 15 Apr. 2026.
[4] RIBEIRO, M. F.; FRANCISCO, R. E. Web services REST: conceitos, análise e implementação. Available at: https://publicacoes.ifba.edu.br/index.php/etc/article/view/25. Accessed: 15 Apr. 2026.
[5] TOTVS, Equipe. Arquitetura REST: saiba o que é e seus diferenciais. Available at: https://www.totvs.com/blog/developers/rest/. Accessed: 15 Apr. 2026.
2026-04-20 10:41:00
Australia-Bali fares shift constantly. Melbourne (MEL) to Denpasar (DPS) ranges A$420-550 return; Brisbane (BNE) sits at A$380-520. One-way fares from budget carriers start around A$180, but add A$40-70 per leg for 20 kg bags and the real cost climbs. Manually checking fares across a 14-day travel window for 5 Australian cities is impractical.
You need an API. Here is an honest look at the three real options.
| Criteria | SerpApi | Amadeus Self-Service | Scraping |
|---|---|---|---|
| Setup time | 15 min | 2-4 hr (IATA onboarding) | 30 min |
| Data accuracy | Google Flights mirror | Authoritative GDS data | Variable |
| Rate limits | Plan-based | 2000 req/month free tier | Site-dependent |
| Cost | Paid (free tier limited) | Free test; paid production | Infrastructure only |
| Legality | Clear ToS | Full commercial licence | Grey area |
| Bali route coverage | Strong | Strong | Carrier-dependent |
```python
import requests

def serpapi_per_dps(depart: str, return_date: str) -> list:
    params = {
        "engine": "google_flights",
        "departure_id": "PER",
        "arrival_id": "DPS",
        "outbound_date": depart,
        "return_date": return_date,
        "currency": "AUD",
        "hl": "en",
        "api_key": "YOUR_KEY",
    }
    r = requests.get("https://serpapi.com/search", params=params, timeout=15)
    r.raise_for_status()
    flights = r.json().get("best_flights", [])
    return [{"price": f["price"], "duration": f["total_duration"]} for f in flights]

# Check a Bali fare for a specific week
results = serpapi_per_dps("2026-03-10", "2026-03-20")
for result in results:
    print(f"A${result['price']} | {result['duration']} min")
```
SerpApi reflects Google Flights results, not a live GDS feed, so prices may lag 15-30 minutes. Adequate for daily monitoring; insufficient for real-time booking arbitrage.
```python
from amadeus import Client, ResponseError

amadeus = Client(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
)

try:
    response = amadeus.shopping.flight_offers_search.get(
        originLocationCode="SYD",
        destinationLocationCode="DPS",
        departureDate="2026-03-10",
        returnDate="2026-03-20",
        adults=1,
        currencyCode="AUD",
        max=10,
    )
    for offer in response.data:
        price = offer["price"]["total"]
        print(f"A${price}")
except ResponseError as e:
    print(e)
Amadeus pulls from the GDS, so fares are bookable and accurate. The free test environment caps at 2000 API calls per month, which covers a daily multi-date sweep across 5 Australian cities with room to spare.
For a developer wanting to track cheap flights to Bali without IATA accreditation, SerpApi is the pragmatic starting point. It covers all major Australian origins (PER, SYD, MEL, BNE, DRW), returns structured JSON, and takes 15 minutes to integrate.
For a product that needs to display bookable Bali fares, Amadeus is the correct tool. The onboarding friction is real but the data quality justifies it.
Scraping individual carrier sites is fragile. Session tokens, bot detection, and varying page structures across budget carriers make maintenance costly relative to either API option.
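Whichever API you pick, the daily multi-city sweep is the same loop. A minimal sketch, assuming a `fetch(origin, depart)` wrapper around either API; the stub fares below are invented so the sketch runs offline:

```python
from itertools import product

ORIGINS = ["PER", "SYD", "MEL", "BNE", "DRW"]              # the five origin cities
DEPART_DATES = ["2026-03-10", "2026-03-11", "2026-03-12"]  # a slice of the 14-day window

def sweep(fetch):
    """Run one daily sweep. `fetch(origin, depart)` is any fare source,
    e.g. a SerpApi or Amadeus wrapper, returning a list of prices."""
    best = {}
    for origin, depart in product(ORIGINS, DEPART_DATES):
        prices = fetch(origin, depart)
        if prices:
            best[origin] = min(best.get(origin, float("inf")), min(prices))
    return best

# Stub fetcher so the sketch runs without network access; swap in a real API call.
fake_fares = {"PER": [410, 455], "SYD": [520], "MEL": [430, 470],
              "BNE": [390], "DRW": [480]}
cheapest = sweep(lambda origin, depart: fake_fares[origin])
print(cheapest)  # cheapest return fare found per origin
```

At 5 origins and a 14-date window, one sweep is 70 requests per day, comfortably inside the Amadeus 2000-call free tier.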
If you are building a travel tool, consider that users landing in Bali face a connectivity gap. Ngurah Rai SIM kiosks queue 20-30 minutes post-arrival. Telstra and Optus roaming costs A$10-15 per day. Surfacing a HelloRoam Indonesia eSIM option (from A$3.53 on Telkomsel 5G) at the booking confirmation step solves a real arrival-day friction point for travellers coming from any Australian city.
Q: Which API is best for a Bali fare tracker side project?
A: SerpApi for fastest prototyping. Amadeus if you need fare class depth or plan to scale to production.
Q: Can I scrape Google Flights directly for Bali fare data?
A: Playwright works until Google changes its DOM, which happens regularly. SerpApi is more stable and the ongoing maintenance time for scraping typically exceeds the API subscription cost.
Q: How often should I poll for Bali fare changes?
A: Once daily is sufficient. Airlines batch inventory updates; more frequent polling wastes API quota without surfacing new data.
Q: What date range should I target for cheap 2026 Bali fares from Australia?
A: February through March for lowest fares. October for value combined with dry-season conditions. Avoid mid-June through August and mid-December through January.
Q: Do the APIs return baggage fee data for Bali routes?
A: Amadeus includes fare conditions and baggage data in the fare offer object. SerpApi mirrors Google Flights, which shows baggage policy per fare. Both give more useful pricing context than headline fare alone.
2026-04-20 10:40:15
This is a submission for the Weekend Challenge: Earth Day Edition
EcoScore AI is a web application that evaluates your environmental impact in seconds and provides recommendations to improve your sustainability habits.
The goal is not just to calculate a score, but to understand real environmental behavior.
🔗 Live Demo: https://eco-score-frontend.vercel.app/
🔗 GitHub Repository: https://github.com/Andyx1923/eco-score-ai
Angular → Express API → MongoDB → AI (Gemini)
Data is sent to the backend
Backend:
These issues helped me understand real-world full-stack development.
This project is designed to go beyond a demo.
My goal is to implement EcoScore AI at the Technical University of Machala (UTMACH).
By collecting anonymous responses, we can:
With enough data, we can answer:
This could help drive initiatives like:
Currently, recommendations are basic.
The next step is to fully integrate Google Gemini to:
This project aims to qualify for:
👉 Best Use of Google Gemini
This started as a simple Earth Day project.
But it evolved into something bigger:
A tool that could help a real community understand and improve its environmental impact.
And that’s the kind of software I want to build.
2026-04-20 10:32:50
In my first attempt at Prompt Wars, I built a working AI solution.
It was functional. It responded correctly. But it didn’t feel like a real system.
That was the biggest lesson.
So for my second attempt, I changed my approach completely.
The system combines deterministic decision logic with AI reasoning, ensuring reliable outputs without hallucination.
The Shift in Thinking
Instead of asking: “How do I build this feature?”
I started asking: “How would this work in a real production system?”
That one change made everything different.
What I Built
I created an enhanced version of:
Smart Stadium AI Assistant
The goal:
• Reduce crowd congestion
• Help users navigate efficiently
• Provide real-time intelligent suggestions
But this time, the focus was not just output…
It was decision-making.
Core Architecture
I designed the system with clear layers:
User Input
→ Context Builder
→ Real-time Data (Firebase pattern)
→ Historical Data (BigQuery pattern)
→ Decision Engine
→ Vertex AI (reasoning)
→ Final Response
Each component has a role.
This made the system structured and scalable.
Tech Stack
Frontend
HTML5, CSS3 (Glassmorphism UI)
Vanilla JavaScript (no heavy frameworks)
Custom SVG for stadium visualization
Backend
Node.js
Express.js
AI & Decision Layer
Rule-based Decision Engine (custom logic)
Vertex AI (intent classification + reasoning)
Data & Simulation
Firebase (simulated real-time crowd data)
BigQuery (simulated historical analytics)
Deployment
Google Cloud Run (containerized Node.js service)
Google Cloud Build (image build & deployment)
Key Improvements Over Attempt 1
Decision Engine (Game Changer)
Instead of directly generating responses, I added a decision layer.
• Rule-based logic (shortest vs least crowded)
• Priority handling (normal vs high congestion)
• Fallback logic for edge cases
This ensured:
The system decides first, then responds
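In simplified form, the core of such a decision layer looks like this; the threshold and field names here are illustrative, not the exact production values:

```python
def decide(options, congestion_threshold=0.7):
    """Pick a venue before any text is generated.
    options: list of dicts with 'name', 'distance_m', 'congestion' (0.0-1.0).
    Rule 1: normal load -> shortest walk.
    Rule 2: high load anywhere -> least crowded option.
    Fallback: empty input -> safe null decision instead of a crash."""
    if not options:                       # edge case: nothing to recommend
        return {"decision": None, "reason": "no options available"}
    if any(o["congestion"] > congestion_threshold for o in options):
        # Priority rule: under heavy congestion, minimise crowding
        pick = min(options, key=lambda o: o["congestion"])
        reason = "high congestion: routing to least crowded"
    else:
        # Normal rule: minimise walking distance
        pick = min(options, key=lambda o: o["distance_m"])
        reason = "normal load: shortest route"
    return {"decision": pick["name"], "reason": reason}

stalls = [
    {"name": "Food Stall 1", "distance_m": 40, "congestion": 0.78},
    {"name": "Food Stall 2", "distance_m": 120, "congestion": 0.12},
]
print(decide(stalls))  # recommends Food Stall 2: the nearby stall is congested
```

Only after this function returns does the AI layer turn the structured decision into a user-facing answer.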
Real-time + Historical Intelligence
I simulated Google Cloud patterns:
• Firebase → live crowd data
• BigQuery → historical trends
• Vertex AI → reasoning
Now decisions are based on:
current situation + past patterns
Smarter Responses
Responses are no longer generic.
Example:
Recommended: Food Stall 2 (East Gate)
Crowd: Low (12%)
Reason: Nearby stall has 78% congestion
Alternative: Food Stall 1 (closer but high wait time)
This adds:
• clarity
• trust
• intelligence
Edge Case Handling
Real systems don’t break under pressure.
So I handled:
• Extreme crowd spikes (>85%)
• Empty stadium scenarios
• Invalid user inputs
The system adapts instead of failing.
Clean & Modular Code
Instead of one big file, I structured it as:
/engine → logic (context, decision, simulation)
/services → integrations (Firebase, BigQuery, Vertex AI)
/routes → API handling
/public → frontend
This improves:
• readability
• maintainability
• scalability
Deployment
I deployed the application using Google Cloud Run.
Why Cloud Run?
• Stateless architecture
• Auto-scaling
• Simple deployment
This made the system closer to a real-world setup.
Live App: https://smart-stadium-ai-986344078772.asia-south1.run.app/
Biggest Learning
My first version was:
“An AI that answers questions”
This version became:
“A system that makes decisions”
That is a huge difference.
What Could Be Better
• UI can be improved (more realistic map experience)
• More real integrations instead of simulated patterns
• Better performance optimization
But the foundation is now strong.
Final Thoughts
If you are building AI projects, don’t stop at:
• generating responses
• connecting APIs
Focus on:
How decisions are made
That’s what separates:
• demos
from
• real systems
Thanks for reading
If you have suggestions or feedback, feel free to share.
Always learning, always building
2026-04-20 10:32:25
If you work in engineering, data, or any field that generates files and spreadsheets every day, you have probably lost hours on tasks that a 50-line script could solve in seconds.
In this post I'll show three Python scripts I built and use daily, all available in my python-automation-scripts repository.
You know that Downloads folder with 300 mixed files? PDFs, spreadsheets, images, AutoCAD files, scripts...
file_organizer.py solves it automatically:
```python
import shutil
from pathlib import Path

EXTENSION_MAP = {
    '.pdf': 'documentos/pdf',
    '.xlsx': 'planilhas',
    '.dwg': 'projetos/autocad',
    '.py': 'scripts/python',
    '.sql': 'scripts/sql',
}

def get_category(file: Path) -> str:
    # Simplified helper: map the extension to a folder, unknown types go to 'outros'
    return EXTENSION_MAP.get(file.suffix.lower(), 'outros')

def organize_folder(source_dir, dest_dir, dry_run=False):
    stats = {'moved': 0, 'skipped': 0, 'errors': 0}
    for file in Path(source_dir).iterdir():
        if file.is_file():
            category = get_category(file)
            dest = Path(dest_dir) / category / file.name
            dest.parent.mkdir(parents=True, exist_ok=True)
            if not dry_run:
                shutil.move(str(file), str(dest))
            stats['moved'] += 1
    return stats
```
Run it with dry_run=True first to see what would be moved without changing anything. Simple and safe.
I receive construction measurement spreadsheets every week. I used to open them in Excel, filter, and calculate everything by hand. Now:
```python
from csv_to_summary import summarize, print_summary

summary = summarize('medicao_semana_17.csv')
print_summary(summary)
```
Terminal output:

```
Coluna: custo_total
Min: 1.250,00
Max: 48.320,00
Média: 12.840,75
Mediana: 9.500,00
Soma: 384.022,50
Nulos: 0
```
The script automatically detects which columns are numeric, with no configuration needed.
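A minimal sketch of how that detection can work using only the standard library; this is an illustrative version, not necessarily the exact logic in csv_to_summary.py:

```python
import csv
import statistics

def summarize(path):
    """Compute stats for every CSV column whose non-empty values all
    parse as floats. Illustrative sketch: assumes plain '.' decimals,
    so locale-formatted numbers like '1.250,00' need normalising first."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return {}
    summary = {}
    for col in rows[0]:
        values = []
        for row in rows:
            cell = (row[col] or "").strip()
            if not cell:
                continue  # empty cells count as nulls, not as non-numeric
            try:
                values.append(float(cell))
            except ValueError:
                break  # one non-numeric cell disqualifies the whole column
        else:
            if values:
                summary[col] = {
                    "min": min(values),
                    "max": max(values),
                    "mean": statistics.mean(values),
                    "median": statistics.median(values),
                    "sum": sum(values),
                    "nulls": sum(1 for r in rows if not (r[col] or "").strip()),
                }
    return summary
```

The `for ... else` makes a column numeric only if no cell triggered the `break`, which is what lets it work with zero configuration.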
It's not about being lazy. It's about directing energy toward what matters.
Every repetitive task I automate frees up time to solve real problems: system architecture, data analysis, AI integration.
"Automate the boring. Build the meaningful."
I'm working on more automations focused on:
Check out the full repository: github.com/SkinDevX/python-automation-scripts
If you have script ideas or want to contribute, open an issue; I'll be happy to collaborate! 🚀
2026-04-20 10:28:27
Google's Agent Development Kit (ADK) makes it remarkably easy to build multi-agent AI systems. You can wire up an orchestrator agent, connect it to specialized sub-agents, and have a working pipeline in under 100 lines of Python.
What it does not give you — at least not yet — is a compliance layer.
In regulated industries, that gap is the difference between a production deployment and a liability.
ADK provides a clean callback architecture:
- `before_model_callback` — intercept before the LLM sees the prompt
- `before_agent_callback` — intercept at agent invocation
- `before_tool_callback` — intercept before any tool executes
- `after_model_callback` — intercept after the LLM responds

These hooks exist precisely for this kind of instrumentation. The framework is well-designed. The gap is not architectural — it is that there is no reference implementation for compliance enforcement using these hooks.
Consider three real scenarios:
Higher Education (FERPA)
An admissions agent handles student data. FERPA requires that every disclosure of student education records be logged (34 CFR § 99.32) and that access be limited to legitimate educational interest (34 CFR § 99.31). Without a compliance layer, an ADK agent has no mechanism to enforce or record either requirement.
Healthcare (HIPAA)
An intake triage agent processes patient queries. HIPAA requires that PHI (Protected Health Information) only be accessed by authorized workforce members under a BAA (Business Associate Agreement). An ADK agent without a compliance hook cannot verify BAA status or create the audit trail required by 45 CFR § 164.312.
Enterprise AI (OWASP Agentic AI Top 10 2026)
OWASP's 2026 Agentic AI Top 10 identifies privilege escalation (ASI02), insufficient audit logging (ASI06), and uncontrolled resource consumption (ASI08) as the top risks in multi-agent systems. An ADK orchestrator that spawns sub-agents without privilege boundaries is exposed to all three.
I built ADKPolicyGuard in the regulated-ai-governance package to provide a drop-in compliance layer for ADK agents.
```python
from regulated_ai_governance.adapters.google_adk_adapter import (
    ADKPolicyGuard,
    BigQueryAuditSink,
    Regulation,
)
from google.adk.agents import LlmAgent

# Define policy — FERPA + OWASP Agentic Top 10
guard = ADKPolicyGuard(
    regulations=[Regulation.FERPA, Regulation.OWASP_AGENTIC_TOP10],
    audit_sink=BigQueryAuditSink(
        project_id="your-gcp-project",
        dataset_id="compliance_audit",
        table_id="adk_disclosures",
    ),
    rate_limit_rpm=60,
)

# Wire into your ADK agent via callbacks
agent = LlmAgent(
    name="student_advisor",
    model="gemini-2.0-flash",
    before_agent_callback=guard.before_agent_callback,
    before_model_callback=guard.before_model_callback,
    before_tool_callback=guard.before_tool_callback,
)
```
Every agent invocation is now covered.
The real value shows up in multi-agent systems. In an Orchestrator → LeadAgent → ApplicantAgent architecture, each agent hand-off is a potential privilege escalation point. ADKPolicyGuard enforces that sub-agents cannot exceed the privilege scope of the orchestrator:
```python
from google.adk.agents import SequentialAgent

orchestrator = SequentialAgent(
    name="admissions_orchestrator",
    sub_agents=[lead_agent, applicant_agent],
    before_agent_callback=guard.before_agent_callback,
)
```
The guard's before_agent_callback validates each sub-agent invocation against the original identity scope. A sub-agent cannot access data the orchestrator was not authorized to access — privilege escalation is structurally prevented.
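Conceptually, the scope check at each hand-off reduces to a subset test. The sketch below illustrates the idea only — it is not the actual regulated_ai_governance internals, and the scope strings and agent names are invented:

```python
# Illustrative sketch of privilege-boundary enforcement at agent hand-off.
# Not the real ADKPolicyGuard implementation; scopes below are invented.
ORCHESTRATOR_SCOPES = {"read:applications", "read:transcripts"}

def check_handoff(agent_name, requested_scopes, parent_scopes=ORCHESTRATOR_SCOPES):
    """Allow a sub-agent hand-off only if its requested scopes are a
    subset of the parent's; otherwise block and name the excess."""
    excess = set(requested_scopes) - set(parent_scopes)
    if excess:
        return {"decision": "BLOCKED", "agent": agent_name,
                "reason": f"privilege escalation attempt: {sorted(excess)}"}
    return {"decision": "ALLOWED", "agent": agent_name}

print(check_handoff("applicant_agent", {"read:applications"}))  # ALLOWED
print(check_handoff("rogue_agent", {"write:grades"}))           # BLOCKED
```

Because the check runs before the sub-agent is invoked, an over-privileged request never reaches the model or its tools.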
Every agent interaction produces a structured compliance record:
```json
{
  "event_id": "adk-20260418-001",
  "agent_name": "student_advisor",
  "regulation": "FERPA",
  "decision": "ALLOWED",
  "identity": {"user_id": "stu_001", "role": "student"},
  "tool_calls": ["get_transcript", "check_financial_aid"],
  "timestamp": "2026-04-18T10:30:00Z",
  "rate_limit_remaining": 58,
  "owasp_checks": {
    "ASI01_prompt_injection": "PASS",
    "ASI02_privilege_escalation": "PASS",
    "ASI06_audit_logging": "PASS"
  }
}
```
This record goes directly to BigQuery for compliance reporting, incident investigation, and regulatory audit response.
ADKPolicyGuard is a compliance enforcement layer, not an authentication system. Your application must establish the authenticated identity context before the agent runs. The guard enforces the scope; your auth layer establishes it.
It also does not replace your legal counsel's review of how your specific deployment maps to applicable regulations.
```shell
pip install regulated-ai-governance
```

```python
from regulated_ai_governance.adapters.google_adk_adapter import ADKPolicyGuard, Regulation
```

Examples: `examples/42_google_adk_ferpa_agent.py` through `45_google_adk_hipaa_agent.py`
If you are building ADK agents for healthcare, education, or any regulated environment and want to discuss the compliance architecture, open an issue or connect with me directly.