2025-12-10 06:41:11
Security in distributed systems is often a game of layers. We secure the transport (TLS), we secure the infrastructure (firewalls, VPCs), and we secure application access (Voters, ACLs). But when it comes to the Messenger component — the beating heart of many modern Symfony applications — there has always been a subtle gap. Once a message leaves your producer and sits in a queue (Redis, RabbitMQ, SQS), it is essentially a serialized string payload. If an attacker gains write access to your transport, they can inject malicious messages, modify payloads, or replay old commands.
In Symfony 7.4, the Messenger component introduces a native, robust mechanism to close this gap: Message Signing.
This feature ensures that every message processed by your handlers was indeed produced by your application and has not been tampered with in transit. It brings cryptographic integrity to your message bus with the elegance and developer experience you expect from Symfony.
In this deep dive, we will explore why this matters, how the new signing architecture works, and provide a complete implementation guide using the libraries available today to replicate this “future” standard.
Imagine a standard e-commerce architecture: a checkout endpoint dispatches a ProcessPayment message to an asynchronous transport, and a background worker consumes it.
If an attacker compromises the RabbitMQ instance, they could manually publish a ProcessPayment message for a fake order or modify the amount of a legitimate order. The worker, blindly trusting the serializer, would hydrate the object and execute the handler.
Prior to Symfony 7.4, solving this required writing custom Middleware to wrap messages in a “Signed Envelope,” calculating HMACs, and managing serialization groups carefully. It was boilerplate-heavy and prone to implementation errors (like signing the object before serialization rather than the serialized payload).
The new Signing capability in Symfony 7.4 integrates directly into the Middleware and Serializer chain. It introduces a mechanism to control integrity requirements on a per-handler or per-message basis.
The core philosophy is simple: sign the serialized payload, not the object.
The Dispatch Flow: the serializer encodes the message for the transport, an HMAC of the serialized body is computed using kernel.secret (or a dedicated signing key), and the resulting signature travels with the message as a transport header.
The Consumption Flow: before the serializer even attempts to hydrate the object, the signature is recomputed from the received body and compared; any mismatch throws an InvalidMessageSignatureException.
Since we are acting as early adopters (or polyfilling this for current versions like 7.1/7.2), we will build this feature using the standard symfony/messenger and symfony/serializer libraries. This implementation mirrors the functionality described in the Symfony 7.4 release notes.
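Before building the Symfony integration, the essence of both flows can be sketched in a few lines of framework-free PHP (the payload, header name, and secret below are purely illustrative):

```php
// Dispatch side: compute an HMAC over the *serialized* body and ship it as a header.
$secret = 'kernel-secret';
$body = '{"orderId":123,"amount":99.99,"currency":"USD"}';

$encoded = [
    'body' => $body,
    'headers' => ['X-Message-Signature' => hash_hmac('sha256', $body, $secret)],
];

// Consumption side: recompute and compare BEFORE hydrating any object.
$verify = function (array $encoded) use ($secret): bool {
    $expected = hash_hmac('sha256', $encoded['body'], $secret);

    return hash_equals($expected, $encoded['headers']['X-Message-Signature']);
};

var_dump($verify($encoded)); // bool(true)

// An attacker editing the queued body invalidates the signature.
$encoded['body'] = str_replace('99.99', '10.00', $encoded['body']);
var_dump($verify($encoded)); // bool(false)
```

The rest of this guide packages this exact idea into Messenger’s serializer and middleware extension points.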
Ensure you have the Messenger component installed.
composer require symfony/messenger symfony/serializer symfony/uid
The metadata for the signature needs to travel with the message. In Messenger, we use Stamps. We need a stamp to hold the signature hash and the algorithm used.
// src/Messenger/Stamp/SignatureStamp.php
namespace App\Messenger\Stamp;

use Symfony\Component\Messenger\Stamp\StampInterface;

final class SignatureStamp implements StampInterface
{
    public function __construct(
        private string $signature,
        private string $algorithm = 'sha256'
    ) {}

    public function getSignature(): string
    {
        return $this->signature;
    }

    public function getAlgorithm(): string
    {
        return $this->algorithm;
    }
}
We need a dedicated service to handle the cryptographic heavy lifting. This keeps our middleware clean and allows us to rotate keys or change algorithms easily. We will use PHP’s native hash_hmac for this example, utilizing the kernel.secret as the key.
// src/Service/MessageSigner.php
namespace App\Service;

use Symfony\Component\DependencyInjection\Attribute\Autowire;

class MessageSigner
{
    public function __construct(
        #[Autowire('%kernel.secret%')]
        private string $secret
    ) {}

    public function sign(string $messageBody): string
    {
        return hash_hmac('sha256', $messageBody, $this->secret);
    }

    public function verify(string $messageBody, string $signature): bool
    {
        $expected = $this->sign($messageBody);

        return hash_equals($expected, $signature);
    }
}
This is the core of the feature. To ensure we are signing the exact string that enters the transport, we decorate the Serializer. This guarantees that any modification to the JSON string in the queue will break the signature.
// src/Serializer/SignedMessageSerializer.php
namespace App\Serializer;

use App\Messenger\Stamp\SignatureStamp;
use App\Service\MessageSigner;
use Symfony\Component\Messenger\Envelope;
use Symfony\Component\Messenger\Exception\MessageDecodingFailedException;
use Symfony\Component\Messenger\Transport\Serialization\SerializerInterface;

class SignedMessageSerializer implements SerializerInterface
{
    private const HEADER_SIGNATURE = 'X-Message-Signature';

    public function __construct(
        private SerializerInterface $inner,
        private MessageSigner $signer
    ) {}

    public function decode(array $encodedEnvelope): Envelope
    {
        // 1. Check for the signature header
        if (!isset($encodedEnvelope['headers'][self::HEADER_SIGNATURE])) {
            // If no signature is present, we pass it to the inner serializer.
            // We will enforce the requirement for a signature later in Middleware.
            return $this->inner->decode($encodedEnvelope);
        }

        $signature = $encodedEnvelope['headers'][self::HEADER_SIGNATURE];
        $body = $encodedEnvelope['body'];

        // 2. Verify integrity
        if (!$this->signer->verify($body, $signature)) {
            // Throwing here prevents object hydration, blocking the attack immediately.
            throw new MessageDecodingFailedException('Invalid message signature. The message may have been tampered with.');
        }

        // 3. Decode normally
        $envelope = $this->inner->decode($encodedEnvelope);

        // 4. Add the stamp so handlers know it was verified
        return $envelope->with(new SignatureStamp($signature));
    }

    public function encode(Envelope $envelope): array
    {
        // 1. Encode normally
        $encoded = $this->inner->encode($envelope);

        // 2. Compute the signature of the BODY
        $signature = $this->signer->sign($encoded['body']);

        // 3. Add it to the headers
        $encoded['headers'][self::HEADER_SIGNATURE] = $signature;

        return $encoded;
    }
}
We need to decorate the default messenger serializer.
# config/services.yaml
services:
    App\Serializer\SignedMessageSerializer:
        decorates: 'messenger.transport.native_php_serializer' # Check your messenger config for the ID used
        arguments:
            $inner: '@.inner'
If you are using the default Symfony serializer, the ID is usually messenger.transport.symfony_serializer. You might need to alias this in your messenger.yaml transport config to point to your new signed serializer.
# config/packages/messenger.yaml
framework:
    messenger:
        serializer:
            default_serializer: App\Serializer\SignedMessageSerializer
        transports:
            async:
                dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
                serializer: App\Serializer\SignedMessageSerializer
The #[Signed] Attribute
In the Symfony 7.4 spirit, we want to control this with Attributes. We might want to enforce that specific handlers only accept signed messages, allowing for a gradual rollout.
// src/Attribute/Signed.php
namespace App\Attribute;
use Attribute;
#[Attribute(Attribute::TARGET_CLASS | Attribute::TARGET_METHOD)]
class Signed
{
}
Now, we need a Middleware to enforce this attribute. Even though the Serializer handles the verification, we need to ensure that a handler rejects messages that arrived without a signature at all.
// src/Middleware/EnforceSignatureMiddleware.php
namespace App\Middleware;

use App\Attribute\Signed;
use App\Messenger\Stamp\SignatureStamp;
use Symfony\Component\Messenger\Envelope;
use Symfony\Component\Messenger\Exception\UnrecoverableMessageHandlingException;
use Symfony\Component\Messenger\Middleware\MiddlewareInterface;
use Symfony\Component\Messenger\Middleware\StackInterface;

class EnforceSignatureMiddleware implements MiddlewareInterface
{
    public function handle(Envelope $envelope, StackInterface $stack): Envelope
    {
        $message = $envelope->getMessage();
        $reflection = new \ReflectionClass($message);

        // Check if the message class has the #[Signed] attribute
        if ($reflection->getAttributes(Signed::class)) {
            $stamp = $envelope->last(SignatureStamp::class);

            if (!$stamp) {
                throw new UnrecoverableMessageHandlingException(sprintf(
                    'Message of type "%s" requires a signature, but none was found.',
                    get_class($message)
                ));
            }
        }

        return $stack->next()->handle($envelope, $stack);
    }
}
Register the middleware in messenger.yaml:
framework:
    messenger:
        buses:
            default:
                middleware:
                    - App\Middleware\EnforceSignatureMiddleware
Let’s look at how we use this in a Payment processing scenario.
We mark the message as requiring a signature.
// src/Message/ProcessPayment.php
namespace App\Message;

use App\Attribute\Signed;

#[Signed]
class ProcessPayment
{
    public function __construct(
        public int $orderId,
        public float $amount,
        public string $currency
    ) {}
}
The handler doesn’t need to know about the cryptography. It just does its job. If the code reaches here, we know the message is authentic.
// src/MessageHandler/PaymentHandler.php
namespace App\MessageHandler;

use App\Message\ProcessPayment;
use Psr\Log\LoggerInterface;
use Symfony\Component\Messenger\Attribute\AsMessageHandler;

#[AsMessageHandler]
class PaymentHandler
{
    public function __construct(private LoggerInterface $logger) {}

    public function __invoke(ProcessPayment $message): void
    {
        // This code will ONLY execute if the message had a valid HMAC signature.
        $this->logger->info('Processing secure payment', [
            'order' => $message->orderId,
            'amount' => $message->amount,
        ]);

        // ... Payment logic
    }
}
To ensure your implementation works, you should verify both success and failure scenarios.
Dispatch a message normally via the bus.
// src/Controller/TestController.php
namespace App\Controller;

use App\Message\ProcessPayment;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Messenger\MessageBusInterface;
use Symfony\Component\Routing\Attribute\Route;

#[Route('/test-sign')]
public function test(MessageBusInterface $bus): Response
{
    $bus->dispatch(new ProcessPayment(123, 99.99, 'USD'));

    return new Response('Message Dispatched');
}
Check: Inspect your Transport (e.g., the Doctrine table messenger_messages or Redis). You should see the headers column (or map) contains:
{
    "X-Message-Signature": "a1b2c3d4...",
    "type": "App\\Message\\ProcessPayment"
}
This is the critical test. Manually modify the body of a queued message in your database or Redis: stop the worker (messenger:consume), change the amount in the body JSON from 99.99 to 10.00, then restart the worker.
Expected Result: the worker should throw a MessageDecodingFailedException. The message should be rejected (and sent to the failure transport, if configured).
php bin/console messenger:consume async -vv
Output:
[Critical] Error thrown while handling message ...
MessageDecodingFailedException: Invalid message signature. The message may have been tampered with.
The implementation above uses sha256 and a single shared secret. In a production environment, you might want Key Rotation.
You can enhance the MessageSigner to support a keys array:
# config/services.yaml
parameters:
    messenger.signing_keys:
        - '%env(CURRENT_SIGNING_KEY)%'
        - '%env(OLD_SIGNING_KEY)%' # Allow verifying with the old key during rotation
Update the verify method to loop through the valid keys:
public function verify(string $messageBody, string $signature): bool
{
    foreach ($this->keys as $key) {
        if (hash_equals(hash_hmac('sha256', $messageBody, $key), $signature)) {
            return true;
        }
    }

    return false;
}
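Putting the pieces together, a multi-key signer might look like the sketch below (the wiring of the messenger.signing_keys parameter into the constructor is omitted here; the important detail is that sign() must always use the current, first key, while verify() accepts any key still in the list):

```php
// Multi-key variant of MessageSigner (sketch). Keys are ordered newest first.
class MessageSigner
{
    /** @param string[] $keys Current key first; older keys remain only so in-flight messages still verify. */
    public function __construct(private array $keys) {}

    public function sign(string $messageBody): string
    {
        // New messages are always signed with the current key.
        return hash_hmac('sha256', $messageBody, $this->keys[0]);
    }

    public function verify(string $messageBody, string $signature): bool
    {
        // Accept a signature produced by any configured key (supports rotation windows).
        foreach ($this->keys as $key) {
            if (hash_equals(hash_hmac('sha256', $messageBody, $key), $signature)) {
                return true;
            }
        }

        return false;
    }
}
```

Once the rotation window closes and no messages signed with the old key remain in the queue, drop the old key from the configuration.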
Security is rarely about a single silver bullet; it is about reducing surface area and removing assumptions. By implementing message signing, we remove the dangerous assumption that our transport layer is inviolable. We transform our consumers from blind executors into skeptical gatekeepers, ensuring that every command processed carries the cryptographic seal of approval from your application kernel.
While Symfony 7.4 will bring native, streamlined support for this pattern, the implementation strategies outlined above prove that you don’t need to wait to secure your infrastructure. The tools — Serializer, Messenger, and Middleware — are already in your hands. The only missing piece was the pattern, and now you have it.
As distributed systems and microservices become the standard, the integrity of the messages flowing between them becomes just as critical as the code itself. Don’t leave your queues vulnerable to injection or tampering.
Implementing cryptographic security in high-throughput systems requires precision. If you are looking to audit your current Symfony Messenger architecture or need assistance implementing Zero Trust patterns in your distributed application, I would love to hear from you.
Reach out at https://www.linkedin.com/in/matthew-mochalkin/ to discuss how we can secure your Symfony application’s future, one message at a time.
Prior Work
2.1 Educational Objectives of Learning Activities
3.1 Multiscale Design Environment
3.2 Integrating a Design Analytics Dashboard with the Multiscale Design Environment
5.1 Gaining Insights and Informing Pedagogical Action
5.2 Support for Exploration, Understanding, and Validation of Analytics
5.3 Using Analytics for Assessment and Feedback
5.4 Analytics as a Potential Source of Self-Reflection for Students
Discussion + Implications: Contextualizing Analytics to Support Design Education
6.1 Indexicality: Demonstrating Design Analytics by Linking to Instances
6.2 Supporting Assessment and Feedback in Design Courses through Multiscale Design Analytics
We took a Research through Design approach and created a research artifact to understand the implications of AI-based multiscale design analytics in practice. Our study demonstrates the potential of multiscale design analytics for providing instructors with insights into student design work, thus supporting their assessment efforts. We focused on supporting users engaged in creative design tasks. Underlying our investigation was our understanding of how multiscale design contributes to teaching and performing these tasks.
We develop multiscale design theory to focus on how people assemble information elements in order to convey meanings. The tasks that students perform in the assignments cross fields. Multiscale design tasks are exploratory search tasks, which involve looking up, learning, and investigating [54]. They are information-based ideation tasks, which involve finding and curating information elements in order to generate and develop new ideas as part of creativity and innovation [41, 44]. They are visual design thinking tasks, which involve forming combinations through sketching and the reverse, sketching to generate images of forms in the mind [34]. They are constructivist learning tasks, in which making serves as a fundamental basis for learning by doing [15, 42, 83]. On the whole, multiscale design has roots in diverse fields and, as we see from our initial study, applications in diverse fields. The scopes of intellectual merit and potential broad impact are wide.
The present research contributes a way to convey the meaning of multiscale design analytics derived using AI, by linking dashboard presentation of design analytics with the actual design work that they measure and characterize. Making AI results understandable by humans is fundamental to building their trust in using systems supported by AI [67]. In our study, when the interface presents what is being measured by AI, it allows users to agree or disagree. Specifically, our integration of the dashboard presentation with the actual design environment allowed instructors to independently validate the particular sets of design element assemblages that the AI determined as nested clusters. This makes the interface to the AI-based analytics visible, or as Bellotti and Edwards said, intelligible and accountable [11]. The importance of making AI decisions visible has been noted in healthcare [18, 80] and criminal justice [26] domains. Likewise, in education, supporting users’ understanding of AI-based analytics is vital, as the measures can directly impact outcomes for an individual. Analytics that do not connect with students’ design work would have little meaning for instructors, if at all. Students, if provided with such analytics, would fail to understand and address the shortcomings that they indicate.
Significant implications for future research are stimulated by the current level of investigation of the particular multiscale design analytics in particular situated course context classrooms. We need further investigation of how these as well as new multiscale design analytics affect other design education contexts and design in industry. Such research can investigate the extent to which different analytics and visualization techniques—e.g., indexical representation and animation—are beneficial in specific contexts. Actionable insights on design work can prove vital in improving learners’ creative strategies and abilities, which in turn can stimulate economic growth and innovation [56]. Continued efforts toward simultaneously satisfying the dual goals of AI performance and visibility of decisions—across a range of contexts—have the potential to create broad impacts by providing inroads to addressing complex sociotechnical challenges, such as ensuring reliability and trust [67] in the use of AI systems.
[1] Amina Adadi and Mohammed Berrada. 2018. Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access 6 (2018), 52138–52160.
[2] Nancy E Adams. 2015. Bloom’s taxonomy of cognitive learning objectives. Journal of the Medical Library Association: JMLA 103, 3 (2015), 152.
[3] Robin S Adams, Tiago Forin, Mel Chua, and David Radcliffe. 2016. Characterizing the work of coaching during design reviews. Design Studies 45 (2016), 30–67.
[4] Christopher Alexander. 1964. Notes on the Synthesis of Form. Vol. 5. Harvard University Press.
[5] Patricia Armstrong. 2016. Bloom’s taxonomy. Vanderbilt University Center for Teaching (2016).
[6] Kimberly E Arnold and Matthew D Pistilli. 2012. Course signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd international conference on learning analytics and knowledge. ACM, 267–270.
[7] Yaneer Bar-Yam. 2006. Engineering complex systems: multiscale analysis and evolutionary engineering. In Complex engineered systems. Springer, 22–39.
[8] Evan Barba. 2019. Cognitive Point of View in Recursive Design. She Ji: The Journal of Design, Economics, and Innovation 5, 2 (2019), 147–162.
[9] Benjamin B Bederson. 2011. The promise of zoomable user interfaces. Behaviour & Information Technology 30, 6 (2011), 853–866.
[10] Benjamin B Bederson and Angela Boltman. 1999. Does animation help users build mental maps of spatial information? In Proceedings 1999 IEEE Symposium on Information Visualization (InfoVis ’99). IEEE, 28–35.
[11] Victoria Bellotti and Keith Edwards. 2001. Intelligibility and accountability: human considerations in context-aware systems. Human–Computer Interaction 16, 2-4 (2001), 193–212.
[12] Melanie Birks and Jane Mills. 2015. Grounded theory: A practical guide. Sage.
[13] Paulo Blikstein. 2011. Using learning analytics to assess students’ behavior in open-ended programming tasks. In Proceedings of the 1st international conference on learning analytics and knowledge. ACM, 110–116.
[14] Benjamin S Bloom et al. 1956. Taxonomy of educational objectives. Vol. 1: Cognitive domain. New York: McKay 20 (1956), 24.
[15] Phyllis C Blumenfeld, Elliot Soloway, Ronald W Marx, Joseph S Krajcik, Mark Guzdial, and Annemarie Palincsar. 1991. Motivating project-based learning: Sustaining the doing, supporting the learning. Educational psychologist 26, 3-4 (1991), 369–398.
[16] Gabriel Britain, Ajit Jain, Nic Lupfer, Andruid Kerne, Aaron Perrine, Jinsil Seo, and Annie Sungkajun. 2020. Design is (A)live: An Environment Integrating Ideation and Assessment. In CHI Late-Breaking Work. ACM, 1–8.
[17] Ann L Brown. 1992. Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The journal of the learning sciences 2, 2 (1992), 141–178.
[18] Adrian Bussone, Simone Stumpf, and Dympna O’Sullivan. 2015. The role of explanations on trust and reliance in clinical decision support systems. In 2015 International Conference on Healthcare Informatics. IEEE, 160–169.
[19] Kathy Charmaz. 2014. Constructing grounded theory. Sage.
[20] Bo T Christensen and Linden J Ball. 2016. Dimensions of creative evaluation: Distinct design and reasoning strategies for aesthetic, functional and originality judgments. Design Studies 45 (2016), 116–136.
[21] Andy Cockburn, Amy Karlson, and Benjamin B Bederson. 2009. A review of overview+detail, zooming, and focus+context interfaces. ACM Computing Surveys (CSUR) 41, 1 (2009), 1–31.
[22] Deanna P Dannels, Amy L Housley Gaffney, and Kelly Norris Martin. 2011. Students’ talk about the climate of feedback interventions in the critique. Communication Education 60, 1 (2011), 95–114.
[23] John Davies, Erik de Graaff, and Anette Kolmos. 2011. PBL across the disciplines: research into best practice. In The 3rd International Research Symposium on PBL. Aalborg: Aalborg Universitetsforlag.
[24] Shane Dawson, Leah Macfadyen, Evan F Risko, Tom Foulsham, and Alan Kingstone. 2012. Using technology to encourage self-directed learning: The Collaborative Lecture Annotation System (CLAS). In Australasian Society for Computers in Learning in Tertiary Education. 246–255.
[25] Barbara De La Harpe, J Fiona Peterson, Noel Frankham, Robert Zehner, Douglas Neale, Elizabeth Musgrave, and Ruth McDermott. 2009. Assessment focus in studio: What is most prominent in architecture, art and design? International Journal of Art & Design Education 28, 1 (2009), 37–51.
[26] Ashley Deeks. 2019. The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review 119, 7 (2019), 1829–1850.
[27] Erik Duval. 2011. Attention please!: learning analytics for visualization and recommendation. In Proceedings of the 1st international conference on learning analytics and knowledge. ACM, 9–17.
[28] Clive L. Dym, Alice M. Agogino, Ozgur Eris, Daniel D. Frey, and Larry J. Leifer. 2005. Engineering Design Thinking, Teaching, and Learning. Journal of Engineering Education 94, 1 (Jan 2005), 103–120. https://doi.org/10.1002/j.2168-9830.2005.tb00832.x
[29] Charles Eames and Ray Eames. 1968. Powers of ten. Pyramid Films (1968).
[30] Vladimir Estivill-Castro and Ickjai Lee. 2002. Multi-level clustering and its visualization for exploratory spatial analysis. GeoInformatica 6, 2 (2002), 123–152.
[31] William Gaver. 2012. What should we expect from research through design? In Proceedings of the SIGCHI conference on human factors in computing systems. 937–946.
[32] Dedre Gentner and Albert L Stevens. 2014. Mental models. Psychology Press.
[33] John S Gero and Mary Lou Maher. 1993. Modeling creativity and knowledge-based creative design. Psychology Press.
[34] Gabriela Goldschmidt. 1994. On visual design thinking: the vis kids of architecture. Design studies 15, 2 (1994), 158–174.
[35] William A Hamilton, Nic Lupfer, Nicolas Botello, Tyler Tesch, Alex Stacy, Jeremy Merrill, Blake Williford, Frank R Bentley, and Andruid Kerne. 2018. Collaborative Live Media Curation: Shared Context for Participation in Online Learning. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 1–14.
[36] Meng-Leong How, Sin-Mei Cheah, Yong-Jiet Chan, Aik Cheow Khor, and Eunice Mei Ping Say. 2020. Artificial intelligence-enhanced decision support for informing global sustainable development: A human-centric AI-thinking approach. Information 11, 1 (2020), 39.
[37] Hilary Hutchinson, Wendy Mackay, Bo Westerlund, Benjamin B Bederson, Allison Druin, Catherine Plaisant, Michel Beaudouin-Lafon, Stéphane Conversy, Helen Evans, Heiko Hansen, and others. 2003. Technology probes: inspiring design for and with families. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 17–24.
[38] Ajit Jain. 2017. Measuring Creativity: Multi-Scale Visual and Conceptual Design Analysis. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition. 490–495.
[39] Ajit Jain. 2021. How to Support Situated Design Education through AI-Based Analytics. Ph.D. Dissertation.
[40] Ajit Jain, Andruid Kerne, Nic Lupfer, Gabriel Britain, Aaron Perrine, Yoonsuck Choe, John Keyser, and Ruihong Huang. 2021. Recognizing creative visual design: multiscale design characteristics in free-form web curation documents. In Proceedings of the 21st ACM Symposium on Document Engineering. 1–10.
[41] Ajit Jain, Nic Lupfer, Yin Qu, Rhema Linder, Andruid Kerne, and Steven M. Smith. 2015. Evaluating TweetBubble with ideation metrics of exploratory browsing. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition. 53–62.
[42] David H Jonassen. 1994. Thinking technology: Toward a constructivist design model. Educational technology 34, 4 (1994), 34–37.
[43] Andruid Kerne, Nic Lupfer, Rhema Linder, Yin Qu, Alyssa Valdez, Ajit Jain, Kade Keith, Matthew Carrasco, Jorge Vanegas, and Andrew Billingsley. 2017. Strategies of Free-Form Web Curation: Processes of Creative Engagement with Prior Work. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition. 380–392.
[44] Andruid Kerne, Andrew M Webb, Steven M Smith, Rhema Linder, Nic Lupfer, Yin Qu, Jon Moeller, and Sashikanth Damaraju. 2014. Using metrics of curation to evaluate information-based ideation. ACM Transactions on Computer-Human Interaction (TOCHI) 21, 3 (2014), 1–48.
[45] Aniket Kittur, Lixiu Yu, Tom Hope, Joel Chan, Hila Lifshitz-Assaf, Karni Gilon, Felicia Ng, Robert E Kraut, and Dafna Shahaf. 2019. Scaling up analogical innovation with crowds and AI. Proceedings of the National Academy of Sciences 116, 6 (2019), 1870–1877.
[46] David R Krathwohl. 2002. A revision of Bloom’s taxonomy: An overview. Theory into practice 41, 4 (2002), 212–218.
[47] Markus Krause, Tom Garncarz, JiaoJiao Song, Elizabeth M Gerber, Brian P Bailey, and Steven P Dow. 2017. Critique style guide: Improving crowdsourced design feedback with a natural language model. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 4627–4639.
[48] Hajin Lim. 2018. Design for Computer-Mediated Multilingual Communication with AI Support. In Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing. 93–96.
[49] Lisa-Angelique Lim, Sheridan Gentili, Abelardo Pardo, Vitomir Kovanović, Alexander Whitelock-Wainwright, Dragan Gašević, and Shane Dawson. 2019. What changes, and for whom? A study of the impact of learning analytics-based process feedback in a large course. Learning and Instruction (2019), 101202.
[50] Lori Lockyer, Elizabeth Heathcote, and Shane Dawson. 2013. Informing pedagogical action: Aligning learning analytics with learning design. American Behavioral Scientist 57, 10 (2013), 1439–1459.
[51] Nic Lupfer, Hannah Fowler, Alyssa Valdez, Andrew Webb, Jeremy Merrill, Galen Newman, and Andruid Kerne. 2018. Multiscale Design Strategies in a Landscape Architecture Classroom. In Proceedings of the 2018 Designing Interactive Systems Conference. ACM, 1081–1093.
[52] Nic Lupfer, Andruid Kerne, Rhema Linder, Hannah Fowler, Vijay Rajanna, Matthew Carrasco, and Alyssa Valdez. 2019. Multiscale Design Curation: Supporting Computer Science Students’ Iterative and Reflective Creative Processes. In Proceedings of the 2019 Conference on Creativity and Cognition. ACM, 233–245.
[53] Nic Lupfer, Andruid Kerne, Andrew M Webb, and Rhema Linder. 2016. Patterns of free-form curation: Visual thinking with web content. In Proceedings of the 2016 ACM on Multimedia Conference. http://dl.acm.org/citation.cfm?id=2964303
[54] Gary Marchionini. 2006. Exploratory search: from finding to understanding. Commun. ACM 49, 4 (2006), 41–46.
[55] Richard E Mayer and Roxana Moreno. 2002. Animation as an aid to multimedia learning. Educational psychology review 14, 1 (2002), 87–99.
[56] National Academy of Engineering. 2010. Rising Above the Gathering Storm, Revisited: Rapidly Approaching Category 5. The National Academies Press.
[57] Yeonjoo Oh, Suguru Ishizaki, Mark D Gross, and Ellen Yi-Luen Do. 2013. A theoretical framework of design critiquing in architecture studios. Design Studies 34, 3 (2013), 302–325.
[58] Jane Osmond and Michael Tovey. 2015. The Threshold of Uncertainty in Teaching Design. Design and Technology Education 20, 2 (2015), 50–57.
[59] Antti Oulasvirta, Samuli De Pascale, Janin Koch, Thomas Langerak, Jussi Jokinen, Kashyap Todi, Markku Laine, Manoj Kristhombuge, Yuxi Zhu, Aliaksei Miniukovich, and others. 2018. Aalto Interface Metrics (AIM): A Service and Codebase for Computational GUI Evaluation. In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings. 16–19.
[60] Abelardo Pardo, Jelena Jovanovic, Shane Dawson, Dragan Gašević, and Negin Mirriahi. 2019. Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology 50, 1 (2019), 128–138.
[61] Ken Perlin and David Fox. 1993. Pad: an alternative approach to the computer interface. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques. 57–64.
[62] Yin Qu, Andruid Kerne, Nic Lupfer, Rhema Linder, and Ajit Jain. 2014. Metadata type system: Integrate presentation, data models and extraction to enable exploratory browsing interfaces. In Proc. EICS. ACM, 107–116.
[63] Juan Rebanal, Jordan Combitsis, Yuqi Tang, and Xiang ‘Anthony’ Chen. 2021. XAlgo: a Design Probe of Explaining Algorithms’ Internal States via Question-Answering. In 26th International Conference on Intelligent User Interfaces. 329–339.
[64] Katharina Reinecke, Tom Yeh, Luke Miratrix, Rahmatri Mardiko, Yuechen Zhao, Jenny Liu, and Krzysztof Z Gajos. 2013. Predicting users’ first impressions of website aesthetics with a quantification of perceived visual complexity and colorfulness. In Proc. CHI. ACM, 2049–2058.
[65] Wojciech Samek, Thomas Wiegand, and Klaus-Robert Müller. 2017. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296 (2017).
[66] Elizabeth B-N Sanders and Pieter Jan Stappers. 2008. Co-creation and the new landscapes of design. Co-design 4, 1 (2008), 5–18.
[67] Ben Shneiderman. 2020. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. International Journal of Human–Computer Interaction (2020), 1–10.
[68] Simon Buckingham Shum and Ruth Deakin Crick. 2012. Learning dispositions and transferable competencies: pedagogy, modelling and learning analytics. In Proceedings of the 2nd international conference on learning analytics and knowledge. 92–101.
[69] Katrina Sin and Loganathan Muthu. 2015. Application of Big Data in Education Data Mining and Learning Analytics – A Literature Review. ICTACT journal on soft computing 5, 4 (2015).
[70] Steven M Smith, Thomas B Ward, and Ronald A Finke. 1995. The creative cognition approach. MIT Press.
[71] Lucy A Suchman. 1987. Plans and situated actions: The problem of human-machine communication. Cambridge University Press.
[72] Joshua D Summers and Jami J Shah. 2010. Mechanical engineering design complexity metrics: size, coupling, and solvability. Journal of Mechanical Design 132, 2 (2010).
[73] Edward R Tufte, Nora Hillman Goeler, and Richard Benson. 1990. Envisioning information. Vol. 126. Graphics Press, Cheshire, CT.
[74] David Turnbull and Helen Watson. 1993. Maps Are Territories: Science is an Atlas: A Portfolio of Exhibits. University of Chicago Press.
[75] Barbara Tversky, Julie Bauer Morrison, and Mireille Betrancourt. 2002. Animation: can it facilitate? International journal of human-computer studies 57, 4 (2002), 247–262.
[76] Vladimir L Uskov, Jeffrey P Bakken, Ashok Shah, Nicholas Hancher, Cade McPartlin, and Kaustubh Gayke. 2019. Innovative InterLabs system for smart learning analytics in engineering education. In 2019 IEEE Global Engineering Education Conference (EDUCON). IEEE, 1363–1369.
[77] Katrien Verbert, Erik Duval, Joris Klerkx, Sten Govaerts, and José Luis Santos. 2013. Learning analytics dashboard applications. American Behavioral Scientist 57, 10 (2013), 1500–1509.
[78] Katrien Verbert, Sten Govaerts, Erik Duval, Jose Luis Santos, Frans Van Assche, Gonzalo Parra, and Joris Klerkx. 2014. Learning dashboards: an overview and future research opportunities. Personal and Ubiquitous Computing 18, 6 (2014), 1499–1514.
[79] Johan Wagemans, James H Elder, Michael Kubovy, Stephen E Palmer, Mary A Peterson, Manish Singh, and Rüdiger von der Heydt. 2012. A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure–ground organization. Psychological bulletin 138, 6 (2012), 1172.
[80] Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y Lim. 2019. Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI conference on human factors in computing systems. 1–15.
[81] Alyssa Friend Wise. 2014. Designing pedagogical interventions to support student use of learning analytics. In Proceedings of the fourth international conference on learning analytics and knowledge. 203–211.
[82] Anbang Xu, Huaming Rao, Steven P Dow, and Brian P Bailey. 2015. A classroom study of using crowd feedback in the iterative design process. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing. 1637–1648.
[83] Robert E Yager. 1991. The constructivist learning model. The science teacher 58, 6 (1991), 52.
[84] John Zimmerman, Jodi Forlizzi, and Shelley Evenson. 2007. Research through design as a method for interaction design research in HCI. In Proceedings of the ACM CHI. ACM, 493–502.
\ [85] John Zimmerman, Erik Stolterman, and Jodi Forlizzi. 2010. An analysis and critique of Research through Design: towards a formalization of a research approach. In proceedings of the 8th ACM conference on designing interactive systems. 310–319.
We used the following questions to guide our semi-structured interviews:
\ • Please briefly describe your experiences with the courses dashboard.
\ • Do you think the class would be different with and without the dashboard? If so, how?
\ • How does the way you use the courses dashboard compare with other learning management systems and environments? What is similar? Is anything different?
\ • Has using the dashboard shown you anything new or unexpected about your students’ learning? If yes, what?
\ • What do you understand about the analytics presented on the dashboard with submissions?
\ • Do you utilize analytics? If so, do they support you in monitoring and intervening? In assessment and feedback? How?
\ • If the answer to ‘Do you utilize analytics’ is ‘No’: Do you think these analytics have the potential to become a part of the assessment and feedback that you provide to the students? If so, how?
\ • What do you think about showing these analytics to students on-demand?
\ • Did you click on ‘Scales’ analytics? How did seeing its relationship with the actual design work affect your utilization (or potential utilization) for assessment and feedback?
\ • Did you click on ‘Clusters’ analytics? How did seeing its relationship with the actual design work affect your utilization (or potential utilization) for assessment and feedback?
\ • Has using the dashboard to follow and track student design work changed how you teach or interact with the students? If so, how?
\ • What would you do differently, if anything, next time you teach the class?
\ • What are your suggestions for making the dashboard more suited for your teaching and assessment practices? Or for design education in general?
\ \
:::info Authors:
(1) Ajit Jain, Texas A&M University, USA; Current affiliation: Audigent;
(2) Andruid Kerne, Texas A&M University, USA; Current affiliation: University of Illinois Chicago;
(3) Nic Lupfer, Texas A&M University, USA; Current affiliation: Mapware;
(4) Gabriel Britain, Texas A&M University, USA; Current affiliation: Microsoft;
(5) Aaron Perrine, Texas A&M University, USA;
(6) Yoonsuck Choe, Texas A&M University, USA;
(7) John Keyser, Texas A&M University, USA;
(8) Ruihong Huang, Texas A&M University, USA;
(9) Jinsil Seo, Texas A&M University, USA;
(10) Annie Sungkajun, Illinois State University, USA;
(11) Robert Lightfoot, Texas A&M University, USA;
(12) Timothy McGuire, Texas A&M University, USA.
:::
:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
:::
\
2025-12-10 05:32:30
You can and should test private methods
2025-12-10 05:29:00
Ethereum (ETH) has been retracing sharply, and its approach toward multi-year support zones has prompted renewed activity among larger holders. The token has been revisiting areas that shaped major reversals between 2021 and 2025, and this shift has coincided with an aggressive accumulation trend among whales searching for what crypto to buy now before broader sentiment stabilizes.
This rotation has unfolded at the same time Mutuum Finance (MUTM) records a rapid surge in demand, with its presale Phase 6 already 95% filled and widely viewed as one of the best cryptos to buy now as the testnet launch nears. Many traders examining the best cryptocurrency to invest in have begun shifting their focus toward early-stage opportunities that show measurable progress and sustained momentum.
Ethereum has been struggling to maintain its recent rally, with price action retreating toward the $2835 zone and losing 5.16% within a short window. This decline has brought ETH back into a historical cluster of support levels formed between 2021 and 2025, a region where repeated rebounds shaped multi-year cycles.
The token has been trading inside a very tight long-term structure between $1090 and $4900, and the market is once again reacting to the lower boundary. Immediate support now sits near $2400, a level that was tested across 2023 and 2024 and often triggered renewed accumulation. Below this, $1480 and $1090 remain critical zones that marked past capitulation phases and subsequent recoveries.

Short-term pressures have been intensifying as ETH approaches resistance around $2960. If price fails to regain that level, traders are watching $2820, $2800, and $2740 as potential areas where sellers may attempt to extend control. Indicators reflect weakening strength, as the MACD continues in a bearish configuration and the RSI remains below the 50 line.
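For readers unfamiliar with how an RSI reading "below the 50 line" arises, here is a minimal, illustrative sketch of the conventional 14-period calculation with Wilder smoothing. The price series used is made up for demonstration; it is not ETH market data.

```python
# Minimal RSI sketch (Wilder's 14-period smoothing).
# Illustrative only — the input prices below are hypothetical.
def rsi(prices, period=14):
    gains, losses = [], []
    for prev, cur in zip(prices, prices[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))   # upward moves
        losses.append(max(-change, 0.0))  # downward moves (as positives)
    # Seed the averages with a simple mean over the first `period` changes.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # Wilder smoothing for the remaining changes.
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no losses at all -> maximum reading
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# A steadily falling series pins RSI at 0; steadily rising pins it at 100.
print(rsi(list(range(16, 0, -1))))  # prints 0.0
```

When losses outweigh gains over the window, RS drops below 1 and RSI falls under 50, which is the "weakening strength" condition the paragraph above describes.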
Exchange balances have also been declining, falling from 16.5 million ETH in early December to nearly 12.5 million ETH by November, showing a multi-month reduction in supply on major platforms. This trend has raised the question of whether shrinking balances may eventually support a recovery or intensify volatility as the market tests deeper supports.
Interest has been shifting toward Mutuum Finance (MUTM), a new crypto coin gaining notable traction among investors scanning the market for the best crypto to invest in ahead of early development milestones. Phase 6 of the presale is currently 95% sold, and the token price stands at $0.035. The raise has reached $19,150,000 since launch, while total holders now stand at 18,330.
This phase is expected to close shortly, and once it does, Phase 7 will activate with a price increase of roughly 14% to $0.04 before advancing toward the confirmed $0.06 listing price. Those entering now still have exposure to roughly 71% upside at the $0.06 listing, a factor encouraging traders evaluating which crypto to buy today for long-term returns to move quickly.

The speed at which Phase 6 is selling out has created a sense of urgency among investors who do not want to miss their final chance to secure MUTM at $0.035 before the increase locks in. Many traders who have been searching for the next big cryptocurrency have referenced this structured presale progression as a major point of confidence.
Mutuum Finance (MUTM) has been amplifying engagement through its $100,000 giveaway, set to award ten participants $10,000 each in MUTM. This initiative has increased visibility among those analyzing what crypto to buy now, particularly at a time when the broader market remains selective.
The team has confirmed that the V1 protocol launch will arrive on the Sepolia testnet in Q4 2025, featuring liquidity pools, mtToken issuance, debt tokens, and a liquidator bot. ETH and USDT will be supported initially. This progress continues to reinforce the perception that MUTM is one of the best cryptos to buy now due to its steady roadmap execution and clear product evolution.
Mutuum Finance (MUTM) has also expanded its dashboard, adding a top-50 leaderboard and a daily 24-hour competition. The user who secures the #1 spot receives a $500 MUTM reward after completing at least one transaction during that period, and the board resets at 00:00 UTC. This feature has strengthened community activity and highlighted MUTM’s dynamic approach to early-stage participation.
Ethereum’s technical pressures have pushed whale activity toward alternative opportunities, further amplifying the rise of MUTM as traders search for the best crypto to buy now. Anyone evaluating what crypto to buy now for exposure ahead of a testnet launch may find their window narrowing quickly as Phase 6 nears completion.
For more information about Mutuum Finance (MUTM), visit the links below:
Website: https://mutuum.com/
:::tip This story was published as a press release by Btcwire under HackerNoon’s Business Blogging Program. Do Your Own Research before making any financial decision.
:::
\ \
2025-12-10 04:00:12
Prior Work and 2.1 Educational Objectives of Learning Activities
3.1 Multiscale Design Environment
3.2 Integrating a Design Analytics Dashboard with the Multiscale Design Environment
5.1 Gaining Insights and Informing Pedagogical Action
5.2 Support for Exploration, Understanding, and Validation of Analytics
5.3 Using Analytics for Assessment and Feedback
5.4 Analytics as a Potential Source of Self-Reflection for Students
Discussion + Implications: Contextualizing Analytics to Support Design Education
6.1 Indexicality: Demonstrating Design Analytics by Linking to Instances
6.2 Supporting Assessment and Feedback in Design Courses through Multiscale Design Analytics
\
For learning analytics to be effective in open-ended, project-based contexts, there is a need to assess complex characteristics that can give insights into students’ creative strategies and abilities [13]. Toward addressing this need, our study investigates how multiscale design analytics support instructors’ assessment efforts in creative project-based learning contexts of design courses.
\ Instructors in our study reported that multiscale design analytics can support them directly or indirectly in assessment and feedback processes. Instructors found that multiscale analytics have the potential to inform pedagogical intervention, based on whether or not students are able to effectively utilize the design environment. In I9’s words, “So if this number is extremely low for everybody…then maybe you need to [give] a tutorial [on the design environment].” Instructors shared that providing these analytics to students can help them reflect and improve their multiscale design skills. For example, in I1’s words, “I would love students to explore more zoom levels…they don’t really utilize being able to…zooming in to certain parts and elaborating.”
\ Implications. Our study demonstrates the potential of multiscale design analytics—which measure complex characteristics of design work—to assist instructors in assessing student work. The organization principles that design instructors expect their students to demonstrate map to the create category in Bloom’s revised taxonomy, i.e., “put elements together” into a “new, coherent whole” [2, 46]. As I4 expressed, “I think that I would definitely like to assign scales as a part of the rubric to say, I would like to see the big picture from out here, and then when you zoom in, see more.” Likewise, I1 expressed that cluster analytics could help students reflect on their design representation and become “more aware about how they separate”. We thus find that multiscale design analytics empower instructors in assessing students’ holistic thinking and creative capabilities.
\ We advocate for future research that investigates further development of multiscale design analytics, visual annotations, and dashboard interactions, as well as more diverse and in-depth studies, in order to develop new knowledge about how to support instructors in developing effective pedagogical interventions, and students in learning how to do design that involves thinking about and presenting complex information. This line of research has the potential to create new educational avenues for teaching how to present complex information that supports audiences in micro and macro readings—i.e., details and overviews [21]—and the formation of mental models [32] and maps. Since conveying and understanding such information is vital in so many areas of society, this mission has the potential for broad impact that benefits society, through the work these students will perform throughout their careers.
\ Further, in design course contexts—where providing frequent feedback is vital—AI-based analytics demonstrate their utility in scaling the assessment. For example, I9 finds using analytics “better than having to go to every [design] and look for every single issue or having a much larger rubric [to run] by.” Instructors in diverse project-based learning contexts—e.g., arts and humanities [23]—engage students in creative, open-ended work. Thus, these contexts are similarly expected to benefit from analytics based on assessments of complex characteristics.
\ The current research provides evidence for multiscale design measures to serve as descriptive analytics, i.e., analytics that provide insights into student work [76]. Going forward, with data from past iterations of a course, these analytics have the potential to function as prescriptive analytics, i.e., to provide instructors and students with alerts and suggestions based on computational modeling of the relationship between analytics and students’ course performance [6, 76]. On-demand feedback through analytics has the potential to stimulate students’ learning-by-doing. Further, incorporating multiscale design analytics in widely distributed tools, such as Photoshop and Illustrator, has the potential to bring widespread benefits, as students use these tools in diverse design course contexts.
\ Multiscale design analytics are not a panacea. On the one hand, our findings show that the present multiscale design analytics provide value to instructors, in situated course contexts. I9 would like to see that students are able to effectively use the multiscale design environment. I2 values students’ presentation of structure at different levels. I1 would “love students to explore more zoom levels”, as she does not see them “zooming in to certain parts and elaborating”.
\ On the other hand, multiscale design analytics were not found to serve as a catch-all measure for design. I4 talks about not wanting all designs to look the same “like you don’t want to go somewhere and see every painting looks the same”. I1 says, “I would rather not control…how they see spatial clusters”.
\ We build theory using creative cognition’s family resemblance principle, according to which no particular characteristic is required for a work to be deemed creative [70]. Rather, a family of traits tends to be indicative. We find that multiscale design, as measured here, functions as one such design creativity trait. As another fruitful avenue for future research, we identify deriving analytics for families of design creativity traits—for example, feasibility, originality, and aesthetics [3, 20]; and gestalt principles, e.g., proximity, closure, continuity, symmetry, parallelism, and similarity of color, size, and orientation [79]—and applying these traits in education and even crowd-sourced design contexts. According to the family resemblance principle, as no particular trait is sufficient, design creativity analytics will never be perfect. But inasmuch as they work well enough, they can provide instructors, students, and other designers with insights so as to (1) provide first-order assessment; and (2) stimulate ongoing work.
\ \
\