The same TCAF-governed agentic architecture that powers PathwayAI delivers two things: a paid grant subscription that tells you how to frame, position, and submit your strongest possible application, and a free weekly intelligence feed tracking the four pillars every equity-focused organization needs to understand.
What you are watching in the ticker above is DJMP's agentic intelligence workflow — TCAF-governed, updated every Monday. The same architecture powers PathwayAI and this intelligence feed.
Google finds the grant.
We tell you how to apply for it strategically.
That’s not the same thing.
Active grants tracked per domain · Updated every Monday · Subscribe for full listings
The AI feed is always free. The grant intelligence is for any AI equity, workforce development, or civic technology nonprofit that needs to find and win funding — written by a team that has applied and won in exactly this space.
LinkedIn's Future of Work Fund provides financial grants and in-kind support for nonprofits preparing young adults for AI-driven careers through AI literacy, job access, workforce success, and system-level innovation. It specifically prioritizes organizations supporting career starters who are overcoming barriers to economic opportunity.
Every week DJMP's research team scans Grants.gov, NSF, NIH, DOL, Candid, Ford Foundation, Gates Foundation, AWS Nonprofits, Google.org, Salesforce.org, and LinkedIn — filtered through the DJMP mission. Each grant is now sorted into one of two tracks: grants DJMP can lead, and grants worth knowing about even when DJMP is ineligible as lead.
Choose a topic above to begin reading.
Six curated intelligence feeds — DJMP News, AI Governance, Agentic AI, AI Ethics, AI Equity, and Cybersecurity — updated weekly and powered by DJMP's agentic research architecture.
Published today, AI News reports that autonomous AI systems are no longer just generating answers — they are planning tasks, making decisions, and executing actions with limited human input. The question has shifted from "does the model give the right answer?" to "what happens when the model is allowed to act?" Governance frameworks that were never designed for autonomous execution are breaking under the weight of that question. TCAF was built to answer it — secure, ethical, equitable, and scalable by architectural design, before the first line of code. Not retrofitted after deployment fails.
McKinsey's 2026 AI Trust Maturity Survey — 500 organizations across industries and regions — finds that average responsible AI maturity has improved to 2.3 out of 4, but only about one-third of organizations have achieved mature governance and agentic AI controls. The governance gap is not isolated — it is globally consistent across every region surveyed. Organizations investing $25M or more in responsible AI show significantly higher maturity and are far more likely to realize measurable business impact. The data is unambiguous: governance built into architecture from the start is a competitive advantage, not a compliance burden. TCAF is that architecture.
Source: McKinsey →
Singapore's Infocomm Media Development Authority released the Model AI Governance Framework for Agentic AI — the first framework in the world to address the specific risks of autonomous AI systems. It recommends that organizations define agent boundaries, implement agentic guardrails, maintain meaningful human control, and build accountability into the full agent lifecycle — not appended after deployment. Resaro's co-CEO called it "the first authoritative resource addressing the specific risks of agentic AI." TCAF operationalizes all four of these properties as design requirements — built into PathwayAI's architecture before any national government had published a standard requiring it.
Source: Singapore IMDA →
Amazon retired the AWS Certified Machine Learning – Specialty credential — the certification that defined AI validation for a generation. The reason: the field has moved from building models to deploying autonomous agents. TCAF fills the governance gap AWS just acknowledged.
Source: AWS →
Chatham House warns international AI governance is in structural deadlock — geopolitical fragmentation and institutional weakness make global coordination "close to impossible." Communities cannot wait for consensus. They need governance built into architecture now.
Source: Chatham House →
The EU AI Act's high-risk AI provisions entered active enforcement — with penalties reaching €30 million or 6% of global revenue. U.S. state-level equivalents are advancing in Colorado, California, and Illinois. Organizations that designed governance into their architecture from the first line of code are not scrambling to retrofit compliance.
Source: EU AI Act →
NIST's updated AI RMF and its 2026 generative AI companion explicitly call out agentic systems as requiring governance frameworks embedded in design — not appended after deployment. TCAF operationalizes exactly these properties as design requirements. PathwayAI was built to this standard two years before the framework was updated to require it.
Source: NIST AI RMF →
Deloitte Insights reports that agentic AI enables government services to move beyond digital forms toward fully customized, proactive service delivery — systems that match individual needs to the right services, securely access data across agencies, and guide users through end-to-end journeys. Agent-to-agent coordination (MIT Media Lab's Project NANDA) will allow personal AI agents to work directly with government agents to execute tasks like "register my business" or "pay my tax bill." Estonia's Bürokratt and Abu Dhabi's TAMM platform are already doing this at scale. PathwayAI has been doing it in Bronzeville for two years. The architecture was right. The world just caught up.
Source: Deloitte Insights →
A new analysis in The Week flags "agent washing" as 2026's most consequential governance risk — legacy automation tools with conversational interfaces being marketed as autonomous agentic AI. These systems perform in demos but collapse in real-world complexity. The deeper problem: when organizations give excessive discretion to badly governed systems, errors cascade through interconnected processes. Governance is not a compliance layer added after deployment. It is a design decision made at the architecture stage. TCAF operationalizes exactly this — and PathwayAI is the proof of concept that it works in production, not just in a lab.
Source: The Week →
Gartner projects the agentic AI market — valued at $9 billion in 2025 — will reach $139 billion by 2034 at a 35% compound annual growth rate. Yet while 40% of enterprise applications will embed AI agents by year-end, Deloitte's data still shows only 11% of organizations have agentic systems in production today. The gap between pilot and production is 2026's defining challenge. PathwayAI's five-agent agentic architecture connects Chicagoland residents to federal workforce resources — TCAF-governed, built simultaneously with the framework that evaluates it. That is not a pilot. That is the standard.
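As a quick sanity check, the growth figures quoted here are internally consistent: $9 billion in 2025 compounding to $139 billion by 2034 implies roughly a 35% annual rate. A short illustrative calculation (the `implied_cagr` helper is ours, not Gartner's):

```python
# Quick arithmetic check of the quoted market figures:
# $9B (2025) growing to $139B (2034) is a 9-year horizon.
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

rate = implied_cagr(9e9, 139e9, 9)
print(f"{rate:.1%}")  # roughly 35.5%, in line with the quoted 35% CAGR
```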
Explore PathwayAI →
This month: Oracle announced 22 production-ready enterprise Fusion Agentic Applications handling supply chain, procurement, and financial operations autonomously. Salesforce deployed Agentforce at scale. NVIDIA's Jensen Huang declared that "employees will be supercharged by teams of frontier, specialized and custom-built agents they deploy and manage." The enterprise software industry is being redesigned for autonomous execution. DJMP's architecture was already there. The same agentic design that powers PathwayAI is the architecture every major enterprise is now racing to replicate.
Meet the Builder →
Gartner's latest forecast warns that 40% of agentic AI projects will fail by 2027 — not because the models don't work, but because organizations cannot operationalize them. Legacy system integration, governance gaps, and the absence of architecture-first thinking are the killers. TCAF was built to solve exactly this problem. Secure by Architecture. Ethical by Design. Equitable by Intent. The governance framework built before the first line of code — not bolted on after deployment fails.
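What "architecture-first" governance means in practice can be shown in miniature: action boundaries, human approval gates, and an audit record are properties of the agent itself rather than a layer added later. The sketch below is purely illustrative; the `GovernedAgent` class, agent name, and action names are invented for this example and are not TCAF's or PathwayAI's actual code:

```python
# Illustrative sketch: governance properties built into the agent's type,
# not bolted on after deployment. Hypothetical code, invented for this example.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class GovernedAgent:
    name: str
    allowed_actions: set           # boundary: explicit whitelist of what the agent may do
    approval_required: set         # human control: actions that need a human sign-off
    audit_log: list = field(default_factory=list)  # accountability: every request is recorded

    def request(self, action: str, approved_by: Optional[str] = None) -> str:
        entry = {"agent": self.name, "action": action, "approved_by": approved_by,
                 "time": datetime.now(timezone.utc).isoformat()}
        if action not in self.allowed_actions:
            entry["outcome"] = "blocked"           # guardrail: out-of-boundary actions never run
        elif action in self.approval_required and approved_by is None:
            entry["outcome"] = "pending_approval"  # meaningful human control before execution
        else:
            entry["outcome"] = "executed"
        self.audit_log.append(entry)
        return entry["outcome"]

agent = GovernedAgent(
    name="resource-navigator",  # hypothetical agent name
    allowed_actions={"lookup_program", "submit_referral"},
    approval_required={"submit_referral"},
)
print(agent.request("lookup_program"))                               # executed
print(agent.request("submit_referral"))                              # pending_approval
print(agent.request("submit_referral", approved_by="case-manager"))  # executed
print(agent.request("delete_records"))                               # blocked
```

The point of the sketch is that "blocked" and "pending_approval" are possible outcomes by construction, decided before any action runs, and every decision leaves an audit entry.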
Our Programs →
Gartner projects the agentic AI market — valued at $9 billion in 2025 — will reach $139 billion by 2034 at a 35% compound annual growth rate. Yet while 40% of enterprise applications will embed AI agents by year-end, only 11% of organizations have agentic systems in production today. The gap between pilot and production is 2026's defining challenge. PathwayAI's five-agent agentic architecture connects Chicagoland residents to federal workforce resources — TCAF-governed, built simultaneously with the framework that evaluates it.
Explore PathwayAI →
Gartner warns 40% of agentic AI projects will fail by 2027 — not because models don't work, but because organizations cannot operationalize them. Legacy integration, governance gaps, and the absence of architecture-first thinking are the killers. TCAF was built to solve exactly this problem.
Our Programs →
Deloitte's 2026 Emerging Technology Trends report finds the gap between pilot and production is 2026's defining challenge. McKinsey finds high-performing organizations are 3x more likely to have scaled agents — the differentiator is not the model, it is the willingness to redesign workflows around agent-first thinking.
Explore PathwayAI →
The World Economic Forum reports Singapore, Barcelona, Estonia, and the UK are deploying agentic AI into public services. The WEF calls this the "hybrid workforce" era — AI handles transactional work; humans retain ethical authority. DJMP's TCAF was designed for exactly this architecture.
Meet the Builder →
The CEO Alliance for Mental Health — representing major behavioral health organizations — declared 2026 priorities around AI: "Ethical Stewardship and Protection" requires AI to be "ethical by design," with proactive safeguards for privacy, safety, and consumer rights built in from the start. The Alliance calls for AI to reduce disparities in care access, not reproduce them — and explicitly warns against allowing AI to replace the essential human element of mental health treatment. TCAF's Ethical by Design pillar was built for exactly this deployment context — systems used in crisis routing, suicide risk assessment, and behavioral health navigation, where the consequences of governance failure are measured in lives.
Source: CEO Alliance for Mental Health →
ISACA's April 1 analysis documents that AI chatbots have been linked to the deaths of multiple people — some of them teenagers — and that low-paid content moderation workers are suffering PTSD from training data exposure, yet are barred by NDAs from discussing it with mental health professionals. Laws and regulations do not address many of these outcomes. ISACA concludes that enterprises must proactively anticipate the ethical consequences of AI systems before deployment. This is the architecture argument TCAF has been making since its first draft: ethics built in at the design stage is not optional — it is the only way to prevent harm that cannot be undone.
Source: ISACA →
The Eliminating Bias in Algorithmic Systems (BIAS) Act — reintroduced in January and still advancing — would require every federal agency that uses, funds, or oversees AI to establish a dedicated Office of Civil Rights focused on algorithmic accountability. The National Urban League called it "long overdue," citing how opaque algorithms "reinforce systemic inequities that disproportionately harm Black, Brown, Indigenous, and immigrant communities." Algorithms already discriminate in hiring, housing, credit, and criminal justice. TCAF's "Ethical by Design" pillar is the architectural response — built before deployment, not mandated after harm is documented.
Source: Rep. Summer Lee →
Brookings defines algorithmic exclusion as a structural harm distinct from bias — when AI systems lack enough data on certain individuals to return any output at all. The populations most affected: low-income Americans, new immigrants, survivors of domestic violence, and the digitally disconnected. These are not data gaps. They are "data deserts" — zones where AI cannot function, and where the same economic forces that marginalize people offline also erase them from training data. TCAF was built on exactly this insight before the report was written. PathwayAI connects those populations to federal workforce resources in real time — not because the data was clean, but because the architecture was designed to see them.
Source: Brookings Institution →
OpenAI Foundation pledged $1 billion in grants targeting AI's impact on jobs, the economy, and children's mental health. Round 2 of the People-First AI Fund is expected. DJMP's mission is in direct alignment — communities leading AI design, not just consuming it. Subscribe for full application analysis and deadlines.
Subscribe for Full Analysis →
The Eliminating Bias in Algorithmic Systems (BIAS) Act — reintroduced in January and still advancing — would require every federal agency that uses AI to establish a dedicated Office of Civil Rights focused on algorithmic accountability. Algorithms already discriminate in hiring, housing, credit, and criminal justice. TCAF's "Ethical by Design" pillar is the architectural response.
Source: Rep. Summer Lee →
Brookings defines algorithmic exclusion as a structural harm — when AI systems lack enough data on certain individuals to return any output at all. These are "data deserts" — zones where AI cannot function, and where the same forces that marginalize people offline also erase them from training data. TCAF was built on this insight. PathwayAI was designed to see them.
Source: Brookings Institution →
Brookings released a major policy proposal arguing that when AI systems fail to generate outputs for certain populations due to missing data — not bias, but invisibility — that is a structural harm requiring regulatory remedy. The populations most affected: low-income Americans, new immigrants, and the digitally disconnected.
TCAF Framework →
Emerging U.S. state laws and the EU AI Act create compliance mandates with real penalties — up to $500K per violation. Organizations that designed ethics into their architecture from the start are the ones positioned to lead.
Our Mission →
In March 2026, the World Health Organization and TU Delft's Delft Digital Ethics Centre convened over 30 international researchers, policymakers, clinicians, and advocates to establish priorities for responsible AI in mental health. WHO's Director of Data and Digital Health stated: "As AI increasingly interacts with people in moments of emotional vulnerability, these systems must be designed and governed with safety, accountability and human well-being at their core." WHO is now establishing a global Consortium of Collaborating Centres on AI for Health to ensure AI governance in health is grounded in evidence, ethics, and the needs of diverse populations. TCAF's Equitable by Intent pillar was built for exactly this design requirement — the communities most vulnerable to AI failure must be the ones most involved in shaping it.
Source: World Health Organization →
The Trump administration canceled the entire $2.75 billion Digital Equity Act grant program — calling it a "DEI program" — and states report the shutoff came overnight. Over 20 states have filed federal lawsuits. Vermont's broadband director said the cancellation threatens workforce development and cyber-crime response. The National Skills Coalition reports the 2026 budget also proposes collapsing WIOA Adult, Youth, and Dislocated Worker programs into a single block grant and cutting Tribal Broadband Connectivity funding from $988 million to $24 million. As federal investment contracts, community-led AI infrastructure becomes more essential — not less. PathwayAI is already doing the work this funding was meant to support. We are not waiting for policy permission.
Source: National Skills Coalition →
One African-American female passed. This is the documented, measured, structural failure that drives every program DJMP delivers. Not an abstract equity statement — a specific data point that became a founding mission. Summer STEM 2026: 30 students. 6 labs. 4 weeks. Bronzeville. Robotics · Cybersecurity · AI · Game Design · Aviation · FLL. Free for every student.
The Mission →
The Department of Labor's Employment and Training Administration issued guidance allowing states to use WIOA funding for AI literacy programs — calling AI literacy "the gateway to opportunity." Simultaneously, the administration is proposing to consolidate and cut the very WIOA programs that would fund it. The federal government is telling states to build AI literacy and removing the money to do it. DJMP's PathwayAI and Fortinet pipeline fill exactly this gap — no federal permission required.
Source: Government Technology →
The Trump administration canceled the entire $2.75 billion Digital Equity Act grant program overnight. Over 20 states filed federal lawsuits. The National Skills Coalition reports the 2026 budget proposes collapsing WIOA Adult, Youth, and Dislocated Worker programs into a single block grant. PathwayAI is already doing the work this funding was meant to support.
Source: National Skills Coalition →
The Department of Labor issued guidance allowing states to use WIOA funding for AI literacy programs — calling AI literacy "the gateway to opportunity." Simultaneously, the administration is proposing to cut the very WIOA programs that would fund it. DJMP's PathwayAI and Fortinet pipeline fill exactly this gap.
Source: Government Technology →
The National Skills Coalition reports the 2026 federal budget proposes collapsing WIOA Adult, Youth, and Dislocated Worker programs into a single block grant while freezing $1.44 billion in Digital Equity Act funding. PathwayAI is already doing the work this funding was meant to support.
PathwayAI in Action →
OpenAI Foundation pledged $1 billion in grants targeting AI's impact on workforce, equity, and children's mental health. The People-First AI Fund Round 2 is expected to open. DJMP's mission is exactly what these funders are looking for.
Subscribe for Full Analysis →
New research finds 92% of security professionals are concerned about the security impact of AI agents operating across their organizations — with 44% citing AI agents accessing sensitive data as their top risk, 36% warning malicious prompts could compromise security, and nearly a third admitting they lack the observability or auditability to intervene once agents are deployed. Meanwhile, 77% of organizations now run generative AI in their security stack, but only 37% have a formal AI policy. The gap between deployment speed and governance is widening year over year. DJMP's TCAF addresses this directly — "Secure by Architecture" means governance is built into the agent before it is deployed, not discovered as a gap after it operates.
Source: Security MEA →
The SEC's 2026 examination priorities signal a historic shift: concerns about cybersecurity and AI have displaced cryptocurrency — the dominant risk topic of the past five years — as the primary focus of regulatory scrutiny. The shift responds to three years of massive data breaches, cyberattacks on non-financial systems, and operational failures of technology providers with cascading impacts. Compliance specialist Rebeca Vergara Goana warns that "AI washing" — slapping an AI label on legacy automation — has become more legally dangerous than greenwashing, as small and mid-sized organizations now face regulatory requirements previously applied only to large corporations. For organizations deploying civic AI: the governance architecture is no longer optional. It is the audit trail.
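One concrete shape that audit trail can take is a hash-chained, append-only log of agent actions, where each entry commits to the previous one so that after-the-fact edits are detectable. This is a hypothetical sketch, not a description of TCAF's or any vendor's implementation:

```python
# Illustrative sketch: a tamper-evident audit trail for agent actions.
# Each entry's hash covers the previous entry's hash, so editing any
# record invalidates every hash after it.
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, agent: str, action: str, outcome: str) -> str:
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        payload = json.dumps(
            {"agent": agent, "action": action, "outcome": outcome, "prev": prev},
            sort_keys=True,
        )
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"payload": payload, "hash": digest})
        return digest

    def verify(self) -> bool:
        # Recompute the whole chain; any edited entry breaks it.
        prev = GENESIS
        for entry in self.entries:
            data = json.loads(entry["payload"])
            recomputed = hashlib.sha256(entry["payload"].encode()).hexdigest()
            if data["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

trail = AuditTrail()
trail.record("intake-agent", "lookup_program", "executed")      # hypothetical names
trail.record("intake-agent", "submit_referral", "pending_approval")
print(trail.verify())  # True: the chain is intact
```

Editing any historical record breaks every subsequent hash, which is what turns a plain log into an audit trail an examiner can trust.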
Source: Corporate Compliance Insights →
Fortinet's 2024 Global Cybersecurity Skills Gap Report documents 4.8 million unfilled cybersecurity roles globally — with 70% of organizations reporting that the shortage is actively increasing their security risk, and 87% experiencing breaches they partially attribute to skills gaps. The workforce must grow substantially to meet demand. The talent is not missing. The door is. DJMP's Fortinet credentialing pipeline — FCF through FCX, five certification levels, free exam vouchers — opens to Chicagoland high school students and adults in April 2026. Pathways to $80K+ starting salaries, delivered at no cost.
Cybersecurity Program →
Fortinet's 2021 pledge to train 1 million people globally in cybersecurity by the end of 2026 is on track — with more than 500,000 individuals already trained through the Fortinet Training Institute. The 2026 restructured certification program now emphasizes practical skills and operational expertise aligned with real-world job roles. DJMP is one of 863 official Fortinet Academic Partners worldwide. Employer data is clear: 89% of organizations would pay for an employee to obtain a Fortinet certification. DJMP eliminates that cost entirely for the communities that need it most.
Cybersecurity Program →
AI is automating repetitive threat detection while creating new roles in model evaluation, AI orchestration, and AI security architecture. The IAPP notes that in 2026 "the laws with teeth are the ones already in use every day" — privacy, civil rights, and consumer protection frameworks are being applied to AI systems right now. The competitive advantage belongs to organizations that reskill their communities before the transition is complete. DJMP's Fortinet curriculum combined with PathwayAI's agentic architecture is built for exactly this moment.
Full Newsroom →
Fortinet's 2024 Global Cybersecurity Skills Gap Report documents 4.8 million unfilled cybersecurity roles globally — with 70% of organizations reporting the shortage is actively increasing their security risk. DJMP's Fortinet credentialing pipeline — FCF through FCX, five certification levels, free exam vouchers — opens to Chicagoland students and adults in April 2026.
Cybersecurity Program →
Fortinet's pledge to train 1 million people globally in cybersecurity by end of 2026 is on track — with more than 500,000 individuals already trained. DJMP is one of 863 official Fortinet Academic Partners worldwide. 89% of employers would pay for an employee to obtain a Fortinet certification. DJMP eliminates that cost entirely.
Cybersecurity Program →
The global cybersecurity workforce gap stands at 4.8 million — up 40% in two years. The workforce must grow by 87% to meet demand. 90% of cybersecurity teams report missing expertise. The communities DJMP serves are not observers of this gap — they are the most prepared to fill it when given access.
Cybersecurity Program →
CyberScoop reports that AI is restructuring the cybersecurity workforce — automating repetitive threat detection while creating new roles in model evaluation, orchestration, and AI security. DJMP's Fortinet + AI curriculum is built for exactly this transition.
Full Newsroom →