AI Intel
AI & Grant Intelligence · Published Every Monday · Two Services · One Architecture

Agentic AI That Finds Grants —
and Track the Four Forces
Reshaping Your World.

The same TCAF-governed agentic architecture that powers PathwayAI delivers two things: a paid grant subscription that tells you how to frame, position, and submit your strongest possible application, and a free weekly intelligence feed tracking the four pillars every equity-focused organization needs to understand.

What you are watching in the ticker above is DJMP's agentic intelligence workflow — TCAF-governed, updated every Monday. The same architecture powers PathwayAI and this intelligence feed.

See Why We're Different
DJMP News · Primary Sources
The Students Who Built AI Before Anyone Funded It.
Real outcomes. Primary sources. Four years of proof — CBS News, Chicago Sun-Times, ABC7, Cook County Board.
Read the Stories
Live Proof of Concept

What you are watching is DJMP's agentic intelligence workflow — TCAF-governed, updated every Monday.

PI — $750K DOE MSEIP Grant · Kennedy-King College · 2012–2015
Northwestern MRSEC Research Fellow · 2015 & 2022 · One of four selected each cycle
PhD Candidate, AI · TCAF Researcher · AWS MLU Faculty Fellow 2026
Grant Intelligence · Built on Who You Are
Upload your org profile.
Get grants matched to your mission.
Find Grants. Apply Stronger. →
$9.99 one-time · $14.99/mo · $29.99/mo Pro · Grant writing from $250
↓   Choose a path above — or scroll to explore

Google finds the grant.
We tell you how to apply for it strategically.
That’s not the same thing.

Searching Online
A list of opportunities with no context about eligibility, framing, or what the funder actually wants
DJMP Grant Intelligence
Curated, contextualized, deadline-flagged, and framing-advised — so your application lands differently
Generic Results
Broad listings that may or may not apply to your mission, size, or geography
Mission-Filtered Intelligence
Every listing filtered through AI equity, workforce development, and civic technology — the work DJMP knows
No Application Guidance
You find the grant and figure out the rest yourself
Winning Application Tips
Every listing includes what the funder actually wants — written by a team that has applied and won
Find grants matched to your mission
Grant Intelligence · By Pillar

Select Your Domain.

Active grants tracked per domain · Updated every Monday · Subscribe for full listings

Workforce Development
7
Grants tracked · Updated weekly
🏗️
AI Governance
4
Grants tracked · Updated weekly
🔐
Cybersecurity
3
Grants tracked · Updated weekly
🤖
Agentic AI
5
Grants tracked · Updated weekly
🧬
STEM & Youth
6
Grants tracked · Updated weekly
✳️
Custom Pillar
?
Built around your org
Grant counts updated weekly · Full listings available to subscribers · Track A / Track B system
Choose your level of intelligence
Five Access Tiers

Choose Your Level of Intelligence

The AI feed is always free. The grant intelligence is for any AI equity, workforce development, or civic technology nonprofit that needs to find and win funding — written by a team that has applied and won in exactly this space.

Grant reports published every Monday  ·  Deadline alerts at 30, 14, and 7 days  ·  Grant writing from $250
Free · Always On
$0
no signup · no paywall · always live
  • Live AI intelligence feed
  • DJMP News — primary sources
  • Agentic AI · AI Ethics · AI Equity
  • Cybersecurity workforce trends
  • Updated every Monday
Open Free Feed ↓
Weekly Report
$9.99
one-time · this edition · yours forever
  • This week's full grant list
  • Deadline flags on every listing
  • Eligibility checklists
  • Application tips per listing
  • Permanent access to this edition
Next week's report requires a new purchase or subscription
Get This Week →
Grant Writing
From $250
per engagement · written by a funded PI
  • Application review — $250
  • Full application — $1,500
  • Application suite — $3,000
  • Monthly retainer — from $800
  • Written by a $750K federal PI
  • TCAF-framed for AI funders
We don't just find the grant. We write it with you.
Start the Conversation →
Pro Subscriber
$29.99
per month · full-service · cancel anytime
  • Every Monday's full report
  • Profile-matched grant alerts
  • 30 · 14 · 7 day deadline warnings
  • Full archive — every past edition
  • Pillar-specific grant scoring
  • Monthly 1:1 strategy call
  • Priority access to new grants
Everything in Subscriber · plus strategy access
Go Pro →
Org Partner
Custom
workforce boards · foundations · consultants
  • White-label grant feed
  • Co-branded weekly reports
  • Early access to new listings
  • Monthly strategy call
  • Joint application support
  • Direct research team access
Partner With Us →
Reports published every Monday  ·  Codes delivered within 2 hours (9am–9pm CT); overnight orders by 9am CT  ·  Stripe secure checkout  ·  Powered by DJMP agentic workflow
Meet the Builders
The grant intelligence is written by a team that has applied — and won.
PI — $750K U.S. DOE MSEIP Grant · Kennedy-King College
Northwestern MRSEC Research Fellow · 2015 & 2022
PhD Candidate, AI · AWS MLU Faculty Fellow 2026 · TCAF Researcher
Grant Writing Services → Meet the Team →
See a free grant intelligence sample
Grant Intelligence · Free Preview
🔓 Free Access · No Subscription Required

LinkedIn Future of Work Fund 2026 — $200,000–$300,000

LinkedIn's Future of Work Fund provides financial grants and in-kind support for nonprofits preparing young adults for AI-driven careers through AI literacy, job access, workforce success, and system-level innovation. It specifically prioritizes organizations supporting career starters who are overcoming barriers to economic opportunity.

Eligibility Checklist
  • 501(c)(3) nonprofit status — required
  • Focus on AI literacy, workforce access, or system-level workforce innovation
  • Direct service to young adults facing systemic barriers to economic opportunity
  • Demonstrated program track record — not concept-stage organizations
  • U.S.-based operations
Application Tip: Lead with your autonomous AI system or measurable community outcome — not your mission statement. LinkedIn's reviewers are looking for nonprofits already doing the work, not planning to.
Amount: $200K–$300K
Type: Grant + In-Kind Support
DJMP Status: ✓ Applied · Mar 15, 2026
Results: 60–90 Days
Fund Details →
🔒
Grant Intelligence Feed · Subscribers Only

8 More Active Grants.
Two Tracks. Every Deadline Flagged.

Every week DJMP's research team scans Grants.gov, NSF, NIH, DOL, Candid, Ford Foundation, Gates Foundation, AWS Nonprofits, Google.org, Salesforce.org, and LinkedIn — filtered through the DJMP mission. Each grant is assigned to one of two tracks: grants DJMP leads, and grants worth knowing even when DJMP is ineligible as lead.

⚡ APPLY NOW — Google.org AI for Government Innovation — $1M–$3M · Deadline April 3, 2026
Track 1: DJMP Eligible as Lead · PathwayAI centerpiece · Agentic AI for civic services
AWS Imagine Grant 2026–2027 — Up to $100K + Cloud Credits
Track 1: DJMP Eligible · AI for nonprofit mission delivery · 501(c)(3) required
NSF CyberAICorps Scholarship for Service — Up to $2.5M · April 3, 2026
Track 2: Informational Only · DJMP ineligible as lead · Partner/funder awareness
Google.org Accelerator: Generative AI — Share of $30M
Track 1: DJMP Eligible · 6-month accelerator + Google Cloud credits
OpenAI People-First AI Fund — Up to $250K
Track 1: DJMP Eligible · Community-led AI design · Youth involvement
Report
$9.99
one-time · this week's full list
Get This Week →
Partner
Custom
org · white-label · joint grants
Contact Us →
Already a subscriber? Enter your access code
How It Works

Five Steps. Every Monday.

01
📋
Upload
Your org profile. Mission. Programs. One upload.
02
🎯
Match
Grants matched to your pillars. Not a generic list.
03
Verify
URLs live. Deadlines confirmed. Eligibility checked.
04
📊
Score
Win likelihood. Funder rubric. Gap analysis.
End Goal
05
🚀
Submit
Application drafted. Human reviewed. Submitted.
Agentic workflow · Upload → Match → Verify → Score → Submit · Every Monday
Free Intelligence Feed · Select a Topic

Choose a topic above to begin reading.

Six curated intelligence feeds — DJMP News, AI Governance, Agentic AI, AI Ethics, AI Equity, and Cybersecurity — updated weekly and powered by DJMP's agentic research architecture.

DJMP Institute · News & Recognition

DJMP Stories

Ongoing · Primary Sources · All Links Verified
CBS News Chicago · April 2023
Justis Walker — $40,000 Amazon Future Engineer Scholarship
DJMP Institute student Justis Walker named among 13 CPS students receiving the Amazon Future Engineer scholarship — one of 400 nationally. Featured nationally by CBS News Chicago.
Read → CBS News
Chicago Sun-Times · July 2017
Stay Safe App Unveiled at Rainbow PUSH 46th Annual Convention
DJMP students mapped every shooting in Chicago from May–June 2017, built a heat map giving students a safer route home, and presented the Stay Safe App at the Rainbow PUSH convention, earning national recognition.
Read → Sun-Times
ABC7 Chicago · 2017
Rainbow PUSH Convention — DJMP Students Featured
ABC7 Chicago covered the Rainbow PUSH Annual Convention featuring DJMP's student technology demonstrations, including the Stay Safe App that mapped Chicago shooting data to protect student commutes.
Read → ABC7
Chicago Crusader · 2017
Cook County Board Honors PUSH Excel STEAM Program
The Cook County Board of Commissioners formally recognized the PUSH Excel STEAM program — the partnership within which DJMP's Saturday Academy and Stay Safe App were developed and delivered.
Read → Crusader
DJMP Institute · February 2026
Martin Pieters Receives AWS Machine Learning University Credential
Martin Pieters recognized at the Tuskegee University AWS Machine Learning University Symposium as Faculty Fellow — the only educator from City Colleges of Chicago in the cohort.
Read → Team Page
DJMP Institute · March 2026
Google.org Impact Challenge Application Submitted — $1.5M
DJMP Institute submitted its PathwayAI application to the Google.org Impact Challenge — $1.5M over 36 months to deploy autonomous AI civic agents connecting Chicagoland residents to federal workforce funding.
Read → Newsroom
Stories sourced from primary press coverage · Updated as new coverage appears · Full Newsroom →
🏗️ AI Governance · TCAF · Pieters, 2026 · A leading U.S. research university

They Certified the Builders.
We Certified the Standard the Builders Should Have Met.

Week of April 6, 2026 · Updated weekly · TCAF · Pieters, 2026 · A leading U.S. research university

As AI Agents Take on More Tasks, Governance Becomes the Defining Priority of 2026.

Published today, AI News reports that autonomous AI systems are no longer just generating answers — they are planning tasks, making decisions, and executing actions with limited human input. The question has shifted from "does the model give the right answer?" to "what happens when the model is allowed to act?" Governance frameworks that were never designed for autonomous execution are breaking under the weight of that question. TCAF was built to answer it — secure, ethical, equitable, and scalable by architectural design, before the first line of code. Not retrofitted after deployment fails.

TCAF · Pieters, 2026 · A leading U.S. research university
The governance framework the field is now asking for — built before the question was public.
Source: AI News →

McKinsey: Only 30% of Organizations Have Mature Agentic AI Governance. The Gap Is Global and Consistent.

McKinsey's 2026 AI Trust Maturity Survey — 500 organizations across industries and regions — finds that average responsible AI maturity has improved to 2.3 out of 4, but only about one-third of organizations have achieved mature governance and agentic AI controls. The governance gap is not isolated — it is globally consistent across every region surveyed. Organizations investing $25M or more in responsible AI show significantly higher maturity and are far more likely to realize measurable business impact. The data is unambiguous: governance built into architecture from the start is a competitive advantage, not a compliance burden. TCAF is that architecture.

Source: McKinsey →

Singapore Releases the World's First Governance Framework Specifically for Agentic AI. The Bar Has Been Set.

Singapore's Infocomm Media Development Authority released the Model AI Governance Framework for Agentic AI — the first framework in the world to address the specific risks of autonomous AI systems. It recommends that organizations define agent boundaries, implement agentic guardrails, maintain meaningful human control, and build accountability into the full agent lifecycle — not appended after deployment. Resaro's co-CEO called it "the first authoritative resource addressing the specific risks of agentic AI." TCAF operationalizes all four of these properties as design requirements — built into PathwayAI's architecture before any national government had published a standard requiring it.

Source: Singapore IMDA →
Week of March 30, 2026

AWS Retired Its ML Specialty Certification. The Field Just Acknowledged What TCAF Already Built.

Amazon retired the AWS Certified Machine Learning – Specialty credential — the certification that defined AI validation for a generation. The reason: the field has moved from building models to deploying autonomous agents. TCAF fills the governance gap AWS just acknowledged.

Source: AWS →

Chatham House: Global AI Governance Is at Risk of Failure. Progress May Only Come After a Crisis.

Chatham House warns international AI governance is in structural deadlock — geopolitical fragmentation and institutional weakness make global coordination "close to impossible." Communities cannot wait for consensus. They need governance built into architecture now.

Source: Chatham House →
Week of March 24, 2026

EU AI Act Enforcement Begins. Penalties Up to €30M or 6% of Global Revenue. Architecture-First Organizations Are Already Compliant.

The EU AI Act's high-risk AI provisions entered active enforcement — with penalties reaching €30 million or 6% of global revenue. U.S. state-level equivalents are advancing in Colorado, California, and Illinois. Organizations that designed governance into their architecture from the first line of code are not scrambling to retrofit compliance.

Source: EU AI Act →

NIST AI RMF 2.0: Agentic AI Requires Governance at the Architecture Level — Not the Audit Level.

NIST's updated AI RMF and its 2026 generative AI companion explicitly call out agentic systems as requiring governance frameworks embedded in design — not appended after deployment. TCAF operationalizes exactly these properties as design requirements. PathwayAI was built to this standard two years before the framework was updated to require it.

Source: NIST AI RMF →
Intelligence updated weekly · Week of April 6, 2026 · TCAF · Pieters, 2026 · A leading U.S. research university · PathwayAI →
🤖 Agentic AI · Pillar 3

Gartner: 40% of Enterprise Applications Will Embed AI Agents by End of 2026. DJMP Has Been Running for Two Years.

Week of April 6, 2026 · Updated weekly · Powered by DJMP agentic architecture

Deloitte: Agentic AI Will Deliver Truly Customized Government Services at Scale. The Infrastructure Is Finally Ready.

Deloitte Insights reports that agentic AI enables government services to move beyond digital forms toward fully customized, proactive service delivery — systems that match individual needs to the right services, securely access data across agencies, and guide users through end-to-end journeys. Agent-to-agent coordination (MIT Media Lab's Project NANDA) will allow personal AI agents to work directly with government agents to execute tasks like "register my business" or "pay my tax bill." Estonia's Bürokratt and Abu Dhabi's TAMM platform are already doing this at scale. PathwayAI has been doing it in Bronzeville for two years. The architecture was right. The world just caught up.

Source: Deloitte Insights →

"Agent Washing" Is the New AI Washing. The Gap Between Demo and Production Is Where Governance Fails.

A new analysis in The Week flags "agent washing" as 2026's most consequential governance risk — legacy automation tools with conversational interfaces being marketed as autonomous agentic AI. These systems perform in demos but collapse in real-world complexity. The deeper problem: when organizations give excessive discretion to badly governed systems, errors cascade through interconnected processes. Governance is not a compliance layer added after deployment. It is a design decision made at the architecture stage. TCAF operationalizes exactly this — and PathwayAI is the proof of concept that it works in production, not just in a lab.

Source: The Week →

Gartner: 40% of Enterprise Apps Will Embed Agents by 2026. Up from Less Than 5% Last Year.

Gartner projects the agentic AI market — valued at $9 billion in 2025 — will reach $139 billion by 2034 at a 35% compound annual growth rate. Yet while 40% of enterprise applications will embed AI agents by year-end, Deloitte's data still shows only 11% of organizations have agentic systems in production today. The gap between pilot and production is 2026's defining challenge. PathwayAI's five-agent agentic architecture connects Chicagoland residents to federal workforce resources — TCAF-governed, built simultaneously with the framework that evaluates it. That is not a pilot. That is the standard.

Explore PathwayAI →

NVIDIA, Oracle, and Salesforce Deploy Production Agentic Systems This Month. The Enterprise Agent Era Has Begun.

This month: Oracle announced 22 production-ready enterprise Fusion Agentic Applications handling supply chain, procurement, and financial operations autonomously. Salesforce deployed Agentforce at scale. NVIDIA's Jensen Huang declared that "employees will be supercharged by teams of frontier, specialized and custom-built agents they deploy and manage." The enterprise software industry is being redesigned for autonomous execution. DJMP's architecture was already there. The same agentic design that powers PathwayAI is the architecture every major enterprise is now racing to replicate.

Meet the Builder →

Gartner: 40% of Agentic AI Projects Will Be Scrapped by 2027. The Reason Is Governance — Not Technology.

Gartner's latest forecast warns that 40% of agentic AI projects will fail by 2027 — not because the models don't work, but because organizations cannot operationalize them. Legacy system integration, governance gaps, and the absence of architecture-first thinking are the killers. TCAF was built to solve exactly this problem. Secure by Architecture. Ethical by Design. Equitable by Intent. The governance framework built before the first line of code — not bolted on after deployment fails.

Our Programs →
Week of March 30, 2026

Gartner: 40% of Enterprise Apps Will Embed Agents by 2026. Up from Less Than 5% Last Year.

Gartner projects the agentic AI market — valued at $9 billion in 2025 — will reach $139 billion by 2034 at a 35% compound annual growth rate. Yet while 40% of enterprise applications will embed AI agents by year-end, only 11% of organizations have agentic systems in production today. The gap between pilot and production is 2026's defining challenge. PathwayAI's five-agent agentic architecture connects Chicagoland residents to federal workforce resources — TCAF-governed, built simultaneously with the framework that evaluates it.

Explore PathwayAI →

Gartner: 40% of Agentic AI Projects Will Be Scrapped by 2027. The Reason Is Governance — Not Technology.

Gartner warns 40% of agentic AI projects will fail by 2027 — not because models don't work, but because organizations cannot operationalize them. Legacy integration, governance gaps, and the absence of architecture-first thinking are the killers. TCAF was built to solve exactly this problem.

Our Programs →
Week of March 24, 2026

Deloitte: 38% Piloting Agentic AI. Only 11% in Production. DJMP Is in the 11%.

Deloitte's 2026 Emerging Technology Trends report finds the gap between pilot and production is 2026's defining challenge. McKinsey finds high-performing organizations are 3x more likely to have scaled agents — the differentiator is not the model, it is the willingness to redesign workflows around agent-first thinking.

Explore PathwayAI →

WEF: Governments Are Deploying Agents at Scale. The Civic AI Era Has Begun.

The World Economic Forum reports Singapore, Barcelona, Estonia, and the UK are deploying agentic AI into public services. The WEF calls this the "hybrid workforce" era — AI handles transactional work; humans retain ethical authority. DJMP's TCAF was designed for exactly this architecture.

Meet the Builder →
Intelligence updated weekly · Week of April 6, 2026
⚖️ Ethical by Design · Pillar 4

Congress Reintroduces the BIAS Act. Federal Agencies That Use AI Must Now Stand Up Civil Rights Offices.

Week of April 6, 2026 · Updated weekly

CEO Alliance for Mental Health: AI in Behavioral Health Must Be "Ethical by Design." Not Patched After Harm Is Done.

The CEO Alliance for Mental Health — representing major behavioral health organizations — declared 2026 priorities around AI: "Ethical Stewardship and Protection" requires AI to be "ethical by design," with proactive safeguards for privacy, safety, and consumer rights built in from the start. The Alliance calls for AI to reduce disparities in care access, not reproduce them — and explicitly warns against allowing AI to replace the essential human element of mental health treatment. TCAF's Ethical by Design pillar was built for exactly this deployment context — systems used in crisis routing, suicide risk assessment, and behavioral health navigation, where the consequences of governance failure are measured in lives.

Source: CEO Alliance for Mental Health →

ISACA: AI Chatbots Have Been Linked to Suicides. Enterprises Must Proactively Anticipate Ethical Consequences Before Deployment.

ISACA's April 1 analysis documents that AI chatbots have been linked to the deaths of multiple people — some of them teenagers — and that low-paid content moderation workers are suffering PTSD from training data exposure, yet are barred by NDAs from discussing it with mental health professionals. Laws and regulations do not address many of these outcomes. ISACA concludes that enterprises must proactively anticipate the ethical consequences of AI systems before deployment. This is the architecture argument TCAF has been making since its first draft: ethics built in at the design stage is not optional — it is the only way to prevent harm that cannot be undone.

Source: ISACA →

Rep. Summer Lee & Sen. Markey Reintroduce the BIAS Act: Every Federal AI Agency Must Have a Civil Rights Office.

The Eliminating Bias in Algorithmic Systems (BIAS) Act — reintroduced in January and still advancing — would require every federal agency that uses, funds, or oversees AI to establish a dedicated Office of Civil Rights focused on algorithmic accountability. The National Urban League called it "long overdue," citing how opaque algorithms "reinforce systemic inequities that disproportionately harm Black, Brown, Indigenous, and immigrant communities." Algorithms already discriminate in hiring, housing, credit, and criminal justice. TCAF's "Ethical by Design" pillar is the architectural response — built before deployment, not mandated after harm is documented.

Source: Rep. Summer Lee →

Brookings: Algorithmic Exclusion Is a Class of Harm Equal to Bias. When AI Can't See a Community, It Can't Serve Them.

Brookings defines algorithmic exclusion as a structural harm distinct from bias — when AI systems lack enough data on certain individuals to return any output at all. The populations most affected: low-income Americans, new immigrants, survivors of domestic violence, and the digitally disconnected. These are not data gaps. They are "data deserts" — zones where AI cannot function, and where the same economic forces that marginalize people offline also erase them from training data. TCAF was built on exactly this insight before the report was written. PathwayAI connects those populations to federal workforce resources in real time — not because the data was clean, but because the architecture was designed to see them.

Source: Brookings Institution →

OpenAI Foundation: $1 Billion in Grants. Workforce + Equity Are the Focus Areas. Round 2 Expected.

OpenAI Foundation pledged $1 billion in grants targeting AI's impact on jobs, the economy, and children's mental health. Round 2 of the People-First AI Fund is expected. DJMP's mission is direct alignment — communities leading AI design, not just consuming it. Subscribe for full application analysis and deadlines.

Subscribe for Full Analysis →
Week of March 30, 2026

Rep. Summer Lee & Sen. Markey Reintroduce the BIAS Act: Every Federal AI Agency Must Have a Civil Rights Office.

The Eliminating Bias in Algorithmic Systems (BIAS) Act — reintroduced in January and still advancing — would require every federal agency that uses AI to establish a dedicated Office of Civil Rights focused on algorithmic accountability. Algorithms already discriminate in hiring, housing, credit, and criminal justice. TCAF's "Ethical by Design" pillar is the architectural response.

Source: Rep. Summer Lee →

Brookings: Algorithmic Exclusion Is a Class of Harm Equal to Bias. When AI Can't See a Community, It Can't Serve Them.

Brookings defines algorithmic exclusion as a structural harm — when AI systems lack enough data on certain individuals to return any output at all. These are "data deserts" — zones where AI cannot function, and where the same forces that marginalize people offline also erase them from training data. TCAF was built on this insight. PathwayAI was designed to see them.

Source: Brookings Institution →
Week of March 24, 2026

Brookings: "Algorithmic Exclusion" Is a Class of Harm Equal to Bias — and It Hits Bronzeville First.

Brookings released a major policy proposal arguing that when AI systems fail to generate outputs for certain populations due to missing data — not bias, but invisibility — that is a structural harm requiring regulatory remedy. The populations most affected: low-income Americans, new immigrants, and the digitally disconnected.

TCAF Framework →

$500,000 Per Violation. New Regulations Target Algorithmic Discrimination.

Emerging U.S. state laws and the EU AI Act create compliance mandates with real penalties — up to $500K per violation. Organizations that designed ethics into their architecture from the start are the ones positioned to lead.

Our Mission →
Intelligence updated weekly · Week of April 6, 2026
🌍 Equitable by Intent · Pillar 5

$2.75 Billion in Digital Equity Funding Canceled. Federal Workforce Programs Gutted. DJMP Is Already Moving.

Week of April 6, 2026 · Updated weekly

WHO and TU Delft Convene 30+ Global Experts: AI for Mental Health Must Be Governed by Ethics, Equity, and Human Well-Being — Not Efficiency Alone.

In March 2026, the World Health Organization and TU Delft's Delft Digital Ethics Centre convened over 30 international researchers, policymakers, clinicians, and advocates to establish priorities for responsible AI in mental health. WHO's Director of Data and Digital Health stated: "As AI increasingly interacts with people in moments of emotional vulnerability, these systems must be designed and governed with safety, accountability and human well-being at their core." WHO is now establishing a global Consortium of Collaborating Centres on AI for Health to ensure AI governance in health is grounded in evidence, ethics, and the needs of diverse populations. TCAF's Equitable by Intent pillar was built for exactly this design requirement — the communities most vulnerable to AI failure must be the ones most involved in shaping it.

Source: World Health Organization →

Trump Administration Canceled $2.75B in Digital Equity Act Funding. 20+ States Are Suing. Communities Are on Their Own.

The Trump administration canceled the entire $2.75 billion Digital Equity Act grant program overnight — calling it a "DEI program," states report. Over 20 states have filed federal lawsuits. Vermont's broadband director said the cancellation threatens workforce development and cyber-crime response. The National Skills Coalition reports the 2026 budget also proposes collapsing WIOA Adult, Youth, and Dislocated Worker programs into a single block grant and cutting Tribal Broadband Connectivity funding from $988 million to $24 million. As federal investment contracts, community-led AI infrastructure becomes more essential — not less. PathwayAI is already doing the work this funding was meant to support. We are not waiting for policy permission.

Source: National Skills Coalition →

24 Black Students Took the AP CS Exam in All of Illinois in 2013. 18 Failed. DJMP Was Built to Close That Gap.

One African-American female passed. This is the documented, measured, structural failure that drives every program DJMP delivers. Not an abstract equity statement — a specific data point that became a founding mission. Summer STEM 2026: 30 students. 6 labs. 4 weeks. Bronzeville. Robotics · Cybersecurity · AI · Game Design · Aviation · FLL. Free for every student.

The Mission →

DOL Issues AI Literacy Framework for States. But Funding to Implement It Has Been Cut.

The Department of Labor's Employment and Training Administration issued guidance allowing states to use WIOA funding for AI literacy programs — calling AI literacy "the gateway to opportunity." Simultaneously, the administration is proposing to consolidate and cut the very WIOA programs that would fund it. The federal government is telling states to build AI literacy and removing the money to do it. DJMP's PathwayAI and Fortinet pipeline fill exactly this gap — no federal permission required.

Source: Government Technology →
Week of March 30, 2026

Trump Administration Canceled $2.75B in Digital Equity Act Funding. 20+ States Are Suing. Communities Are on Their Own.

The Trump administration canceled the entire $2.75 billion Digital Equity Act grant program overnight. Over 20 states filed federal lawsuits. The National Skills Coalition reports the 2026 budget proposes collapsing WIOA Adult, Youth, and Dislocated Worker programs into a single block grant. PathwayAI is already doing the work this funding was meant to support.

Source: National Skills Coalition →

DOL Issues AI Literacy Framework for States. But Funding to Implement It Has Been Cut.

The Department of Labor issued guidance allowing states to use WIOA funding for AI literacy programs — calling AI literacy "the gateway to opportunity." Simultaneously, the administration is proposing to cut the very WIOA programs that would fund it. DJMP's PathwayAI and Fortinet pipeline fill exactly this gap.

Source: Government Technology →
Week of March 24, 2026

$1.44B in Digital Equity Act Funding Frozen. 11 Workforce Programs Proposed for Consolidation.

The National Skills Coalition reports the 2026 federal budget proposes collapsing WIOA Adult, Youth, and Dislocated Worker programs into a single block grant while freezing $1.44 billion in Digital Equity Act funding. PathwayAI is already doing the work this funding was meant to support.

PathwayAI in Action →

OpenAI Foundation: $1 Billion for Workforce Equity and Community-Led AI — Round 2 Expected

OpenAI Foundation pledged $1 billion in grants targeting AI's impact on workforce, equity, and children's mental health. The People-First AI Fund Round 2 is expected to open. DJMP's mission is exactly what these funders are looking for.

Subscribe for Full Analysis →
Intelligence updated weekly · Week of April 6, 2026
🔐 Cybersecurity · Pillar 6

4.8 Million Unfilled Cybersecurity Roles. Fortinet Is On Track to Train 1 Million People by End of 2026. DJMP's Pipeline Opens in April.

Week of April 6, 2026 · Updated weekly

92% of Security Professionals Are Concerned About AI Agents Operating Across Their Organizations. 44% Cite Access to Sensitive Data as the Top Risk.

New research finds 92% of security professionals are concerned about the security impact of AI agents operating across their organizations — with 44% citing AI agents accessing sensitive data as their top risk, 36% warning malicious prompts could compromise security, and nearly a third admitting they lack the observability or auditability to intervene once agents are deployed. Meanwhile, 77% of organizations now run generative AI in their security stack, but only 37% have a formal AI policy. The gap between deployment speed and governance is widening year over year. DJMP's TCAF addresses this directly — "Secure by Architecture" means governance is built into the agent before it is deployed, not discovered as a gap after it operates.

Source: Security MEA →

The SEC's 2026 Examination Priorities: AI and Cybersecurity Have Displaced Crypto as the Industry's Dominant Risk Topic.

The SEC's 2026 examination priorities signal a historic shift: concerns about cybersecurity and AI have displaced cryptocurrency — the dominant risk topic of the past five years — as the primary focus of regulatory scrutiny. The shift responds to three years of massive data breaches, cyberattacks on non-financial systems, and operational failures of technology providers with cascading impacts. Compliance specialist Rebeca Vergara Goana warns that "AI washing" — slapping an AI label on legacy automation — has become more legally dangerous than greenwashing, and that small and mid-sized organizations now face regulatory requirements previously applied only to large corporations. For organizations deploying civic AI: the governance architecture is no longer optional. It is the audit trail.

Source: Corporate Compliance Insights →

4.8 Million Unfilled Cybersecurity Roles. 70% of Organizations Say the Skills Gap Is Increasing Their Risk.

Fortinet's 2024 Global Cybersecurity Skills Gap Report documents 4.8 million unfilled cybersecurity roles globally — with 70% of organizations reporting that the shortage is actively increasing their security risk, and 87% experiencing breaches they partially attribute to skills gaps. The workforce must grow substantially to meet demand. The talent is not missing. The door is. DJMP's Fortinet credentialing pipeline — FCF through FCX, five certification levels, free exam vouchers — opens to Chicagoland high school students and adults in April 2026. Pathways to $80K+ starting salaries, delivered at no cost.

Cybersecurity Program →

Fortinet Is On Track to Train 1 Million People in Cybersecurity by End of 2026. Over Half a Million Already Trained.

Fortinet's 2021 pledge to train 1 million people globally in cybersecurity by the end of 2026 is on track — with more than 500,000 individuals already trained through the Fortinet Training Institute. The 2026 restructured certification program now emphasizes practical skills and operational expertise aligned with real-world job roles. DJMP is one of 863 official Fortinet Academic Partners worldwide. Employer data is clear: 89% of organizations would pay for an employee to obtain a Fortinet certification. DJMP eliminates that cost entirely for the communities that need it most.

Cybersecurity Program →

AI Is Restructuring the Cybersecurity Workforce. New Roles in Model Evaluation and AI Security Are Emerging Now.

AI is automating repetitive threat detection while creating new roles in model evaluation, AI orchestration, and AI security architecture. The IAPP notes that in 2026 "the laws with teeth are the ones already in use every day" — privacy, civil rights, and consumer protection frameworks are being applied to AI systems right now. The competitive advantage belongs to organizations that reskill their communities before the transition is complete. DJMP's Fortinet curriculum combined with PathwayAI's agentic architecture is built for exactly this moment.

Full Newsroom →
Week of March 30, 2026

4.8 Million Unfilled Cybersecurity Roles. 70% of Organizations Say the Skills Gap Is Increasing Their Risk.

Fortinet's 2024 Global Cybersecurity Skills Gap Report documents 4.8 million unfilled cybersecurity roles globally — with 70% of organizations reporting the shortage is actively increasing their security risk. DJMP's Fortinet credentialing pipeline — FCF through FCX, five certification levels, free exam vouchers — opens to Chicagoland students and adults in April 2026.

Cybersecurity Program →

Fortinet Is On Track to Train 1 Million People in Cybersecurity by End of 2026. Over Half a Million Already Trained.

Fortinet's pledge to train 1 million people globally in cybersecurity by the end of 2026 is on track — with more than 500,000 individuals already trained. DJMP is one of 863 official Fortinet Academic Partners worldwide. Employer data shows 89% of organizations would pay for an employee to obtain a Fortinet certification. DJMP eliminates that cost entirely.

Cybersecurity Program →
Week of March 24, 2026

ISC2: 4.8 Million Unfilled Cybersecurity Roles. Skills Gaps Cost $5.22M Per Breach.

The global cybersecurity workforce gap stands at 4.8 million — up 40% in two years. The workforce would need to grow by 87% to meet demand, and 90% of cybersecurity teams report gaps in expertise. The communities DJMP serves are not observers of this gap — they are the most prepared to fill it when given access.

Cybersecurity Program →

AI Is Compressing Some Cyber Roles and Expanding Others. Organizations That Reskill Now Win.

CyberScoop reports that AI is restructuring the cybersecurity workforce — automating repetitive threat detection while creating new roles in model evaluation, orchestration, and AI security. DJMP's Fortinet + AI curriculum is built for exactly this transition.

Full Newsroom →
Intelligence updated weekly · Week of April 6, 2026 · Cybersecurity Program →