Live data

Field Kit

A consulting OS with 40+ tools across 3 MCP servers. This is a live view into the knowledge base — trends, papers, regulations, frameworks, and brand intelligence.

330
Trends
297
Papers
10
Regulations
49
Frameworks
22
Methods
17
Reports

Trend Radar

Integrated Management System Convergence: AI + Sustainability + Innovation on Shared Backbone

A structural convergence is emerging across three ISO management system standards that were previously treated as separate compliance tracks: AI governance (ISO 42001:2023), innovation management (ISO 56001:2024), and environmental management (ISO 14001:2015). All three share the Annex SL High-Level Structure — identical clause numbers (4-10), shared PDCA cycle, common definitions, and compatible audit frameworks. This isn't coincidental; ISO designed it this way. But organizations are only now ...

emerging
1d ago
ISO-42001 · ISO-56001 · ISO-14001 · integrated-management-system

The Rising Cost of Verification: Trust Erosion as Structural Trend

Analysis of a Swedish premium news digest (March 2026) reveals a unifying pattern across seemingly unrelated headlines: the rising cost of determining what is true. Eight stories spanning elite accountability (Epstein docs), geopolitical recruitment (Russia/Africa), AI-generated spectacle (fake celebrity wedding), cultural revisionism (stealth book edits), economic absurdity (Big Mac vs Mac Mini pricing crossover), and financial scandal (watch-bragging bankers) all converge on a single dynamic: ...

emerging
2d ago
verification-cost · trust-erosion · narrative-economy · media-analysis

Microsoft Agent 365 & E7: Enterprise agentic AI with identity governance

At Microsoft AI Tour Stockholm (March 2026), Microsoft unveiled Agent 365 (launching May 1) — an identity manager for AI agents that gives them role-based permissions and governance within the Microsoft ecosystem. The full stack is now Copilot → Fabric → Foundry → Agent 365, with Defender + Purview providing compliance guardrails. Agents can be governed with natural language policies defining what they may and may not do. A new licensing tier, Enterprise 7 (E7), will bundle all AI capabilities ...

accelerating
2d ago
microsoft · agent-365 · copilot · enterprise-ai

Ontology-first data strategy as prerequisite for enterprise AI activation

Converging signal from multiple independent sources at Microsoft AI Tour Stockholm (March 2026): before data can be "AI-activated," organizations need to understand their data, map their processes, and describe them in an ontology. A Databricks consultant confirmed the ontology is never "done" — it's a living artifact you work toward continuously. Speed benchmarks are emerging: Palantir Foundry typically takes 2 weeks from start to first production service. Rubrik demonstrated 5 days within nar...

emerging
2d ago
ontology · data-activation · palantir · rubrik

Commodity Content Trap — Value Migration from Media to Higher Education

## Pattern: When production commodifies, value migrates to curation, exclusivity and presence

Cross-domain pattern observed across three media sectors in 2025-2026, with direct implications for higher education strategy:

### Evidence streams

**1. Swedish podcasts move behind paywalls (2025-2026)** Sweden's biggest podcasts (Ursäkta, Alex & Sigge, Gynning & Berg, Rättegångspodden) are migrating to Podme's paywall model. Nordicom analyst Ulrika Facht frames it as a maturity phase: "To begin wit...

accelerating
2d ago
value-migration · content-commoditization · higher-education · podcasts

Graphiti: Temporal Context Graphs for AI Agent Memory

Graphiti (by Zep) is an open-source temporal context graph engine that builds "context graphs" — knowledge graphs where every fact has a validity window (when it became true, when it was superseded). Unlike traditional static knowledge graphs, entities evolve over time with updated summaries, and everything traces back to episodes (raw source data). Supports Neo4j, FalkorDB, and Kuzu as graph backends. Now includes an MCP server implementation for integration with AI assistants. Observable Patt...

emerging
1w ago
knowledge-graphs · temporal · neo4j · mcp
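The core idea behind a context graph is that every fact carries a validity window rather than being timelessly true. A minimal sketch of that data structure, assuming illustrative names (this is not Graphiti's actual API):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TemporalFact:
    """A fact with a validity window: when it became true, when it was superseded."""
    subject: str
    predicate: str
    obj: str
    valid_from: datetime
    valid_until: Optional[datetime] = None  # None means still current
    episode_id: str = ""  # traces the fact back to its raw source episode

    def is_valid_at(self, t: datetime) -> bool:
        """A fact holds at time t if t falls inside its validity window."""
        if t < self.valid_from:
            return False
        return self.valid_until is None or t < self.valid_until

# Superseding a fact closes its window and opens a new one,
# so both versions remain queryable at their respective times.
old = TemporalFact("zep", "default_backend", "neo4j",
                   datetime(2024, 1, 1), episode_id="ep-1")
old.valid_until = datetime(2025, 6, 1)
new = TemporalFact("zep", "default_backend", "falkordb",
                   datetime(2025, 6, 1), episode_id="ep-2")

assert old.is_valid_at(datetime(2025, 1, 1))
assert not old.is_valid_at(datetime(2025, 7, 1))
assert new.is_valid_at(datetime(2025, 7, 1))
```

The design choice worth noting: superseded facts are never deleted, which is what lets the graph answer "what did we believe at time t" rather than only "what is true now".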

AI Task Duration Doubling Every 7 Months (METR Data)

Research from METR shows the length of tasks AI systems can autonomously complete (at 50% success rate) is doubling approximately every 7 months for software engineering, with similar or faster rates for scientific reasoning, mathematics, computer use, and simulated robotics. Autonomous driving improves substantially more slowly, with a doubling time of about 1.7 years. GPT-5 can complete 217-minute software engineering tasks at 50% success, 26-minute tasks at 80% success. Observable Patterns: Exponential improvement in t...

accelerating
1w ago
ai-capabilities · benchmarking · agentic-ai · task-autonomy
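The doubling claim implies a simple exponential model. A quick sketch of the projection arithmetic, using the card's figures (7-month doubling, 217-minute horizon) as assumed inputs:

```python
def task_horizon(baseline_minutes: float, months_elapsed: float,
                 doubling_months: float = 7.0) -> float:
    """Project the 50%-success task horizon under exponential doubling."""
    return baseline_minutes * 2 ** (months_elapsed / doubling_months)

# Starting from a 217-minute horizon, two doubling periods (14 months) later:
projected = task_horizon(217, 14)
print(round(projected))  # 868 minutes, roughly a 14.5-hour task
```

Note this extrapolates a measured trend, not a law; METR's own framing is a doubling rate observed to date, so projections further out carry correspondingly more uncertainty.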

Agentic AI Production Barbell: 40% Adoption, 40% Cancellation by 2027

Gartner projects 40% of enterprise applications will embed task-specific AI agents by mid-2026, up from less than 5% in early 2025 — an eight-fold increase. Simultaneously, industry analysts predict more than 40% of agentic AI projects will be canceled by 2027 due to escalating costs and unclear business value. This creates a barbell dynamic: massive adoption intent on one end, massive failure rate on the other. The gap between prototype and production is technical, not conceptual — token costs ...

accelerating
1w ago
agentic-ai · enterprise-adoption · production-gap · gartner

Latest Papers

Getting around the task-artifact cycle: how to make claims and design by scenario

John M. Carroll, Mary Beth Rosson · ACM Transactions on Information Systems

This foundational paper introduces the task-artifact cycle — the observation that new artifacts change the tasks they were designed to support, which in turn creates new requirements for artifacts. Carroll and Rosson propose scenario-based design and design claims as methods for breaking out of this cycle, enabling designers to reason about the effects of their designs on user tasks before building them.

HCI · 1992

Scenarios: Uncharted waters ahead

Pierre Wack · Harvard Business Review

This seminal article describes how Royal Dutch/Shell developed and used scenario planning to anticipate the 1973 oil crisis and subsequent energy market disruptions. Wack explains how scenarios differ from forecasts — they are not predictions but tools for changing mental models and improving strategic thinking under uncertainty. The article established scenario planning as a core strategic management discipline.

Policy · 1985

We shape our agents, and our agents shape us

Min Kyung Lee, Sylvia See · Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems

This paper examines the bidirectional relationship between humans and AI agents: how people shape AI through their interactions, preferences, and feedback, and how AI agents in turn shape human behavior, expectations, and decision-making patterns. The paper draws on social shaping of technology theory to analyze how co-adaptation between humans and AI creates emergent sociotechnical dynamics.

HCI · 2022

Empathy probes

Tuuli Mattelmäki, Katja Battarbee · Proceedings of the Participatory Design Conference

This paper introduces empathy probes as a design research method that extends cultural probes with a focus on building empathic understanding between designers and users. The probes are designed to capture emotional, experiential, and contextual aspects of people's lives that are difficult to access through interviews or observation alone, enabling richer and more empathic design processes.

HCI · 2002

Voice Interfaces in Everyday Life

Martin Porcheron, Joel E. Fischer, Stuart Reeves +1 · Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems

This paper examines how voice interfaces are used in everyday domestic settings through conversation analysis of interactions with Amazon Echo devices. It reveals how voice interfaces are woven into ongoing household activities and social interactions, identifying challenges around conversational repair, sequential organization, and the social accountability of talking to a device in shared spaces.

HCI · 2018

Artificial intelligence, systemic risks, and sustainability

Victor Galaz, Miguel A. Centeno, Peter W. Callahan +12 · Technology in Society

This paper examines the systemic risks that artificial intelligence poses to sustainability goals. It identifies cascading failure modes where AI systems interact with social, economic, and ecological systems, potentially amplifying risks rather than mitigating them. The framework connects AI governance to sustainability frameworks, arguing for integrated approaches that consider AI's systemic effects on planetary boundaries and social foundations.

Sustainability · 2021

Knowledge Base

49
Frameworks
11
Cases
22
Methods

Frameworks by Domain

AI Implementation
23
Strategy
19
Sustainability
4
Service Design
3

Brands Tracked

Lund University
higher-education
3 analyses
Uppsala University
higher-education
3 analyses
Linköping University
higher-education
2 analyses
University of Gothenburg
higher-education
2 analyses
Karolinska Institutet
higher-education
1 analysis
Stockholm School of Economics
higher-education
1 analysis
Halmstad University
higher-education
1 analysis
Kristianstad University
higher-education
1 analysis
Stockholm University
higher-education
1 analysis
Örebro University
higher-education
1 analysis
KTH Royal Institute of Technology
higher-education
1 analysis

Powered by 3 MCP servers · Supabase + pgvector · Refreshed hourly

Full Case Study →