Stablecoins strengthen the dollar and empower the developing world
This article was originally published in Cointelegraph.
Stablecoins received a real boost when US President Donald Trump signed the GENIUS Act earlier this year — and now European banks are trying to get in on the act by issuing stablecoins of their own.
Their envy of the US dollar’s supremacy, a long-standing pillar of American economic strength, is understandable. In the wake of the GENIUS Act, dollar-backed, privately issued stablecoins are surging in popularity, presenting a strategic opportunity for the United States.
By creating an environment that enables stablecoins and operating under the umbrella of US banking infrastructure, the US can reinforce the dollar’s global dominance while democratizing access to finance abroad, particularly in developing countries.
These “digital dollars” have numerous benefits. They can cut fees, shorten settlement cycles, counter local inflation and widen access to trade and finance for smaller companies that struggle with correspondent banking.
The stablecoin surge
Stablecoins have surged, with total market capitalization exceeding $265 billion, and nearly all of that value rides on dollars. Each dollar stablecoin must be backed by safe assets, so issuers hold large reserves of US dollars and Treasury bills. That reserve demand shifts Treasury bill ownership away from bank deposits and money market funds toward issuers; the larger ripple effects would come if this infrastructure facilitates more commerce.
Federal Reserve Governor Christopher Waller noted that if regulators “allow these things to go out, this will only strengthen the dollar as a reserve currency,” since greater stablecoin use means higher demand for dollars and US debt. Treasury Secretary Scott Bessent has been even more blunt: “We are going to keep the US [dollar] the dominant reserve currency in the world, and we will use stablecoins to do that.”
Stablecoins and the developing world
For developing countries, integrating with the dollar via stablecoins can unlock sorely needed economic activity. Many of these nations suffer from volatile currencies, high inflation and patchy banking systems. Their citizens often seek refuge in dollars — a phenomenon economists call “dollarization” — but until now, that meant physical cash or costly wire transfers.
Stablecoins change the game by making dollars accessible to anyone with a cell phone. Instead of waiting at a bank and paying high exchange fees, a farmer or shopkeeper can instantly hold digital dollars in a smartphone wallet. Stablecoins are making the world’s most in-demand asset – the US dollar – available on demand, globally.
This has profound implications for financial inclusion. Approximately 1.4 billion adults worldwide remain unbanked, with a substantial proportion residing in Africa and Asia. Stablecoins enable users to save in a stable currency and transact globally without a bank account, thereby bypassing traditional barriers such as ID checks and branch access.
Financial inclusion through stablecoins
In Sub-Saharan Africa, for instance, dollar stablecoins have become a vital tool for payments, savings and commerce amid currency instability. Over 40% of all cryptocurrency transaction volume in Africa is now in stablecoins. Users are even willing to pay a premium for stablecoins; businesses and individuals in emerging markets sometimes pay 5% or more above face value just to obtain digital dollars, which demonstrates their desperate need for a reliable store of value.
Crucially, stablecoins also facilitate commerce. Consider the example of remittances — the lifeblood of many developing economies. Africans abroad sent home $54 billion in remittances in 2023, but traditional channels charge senders an average of nearly 8% in fees. Stablecoins can slash these costs.
In one Kenyan pilot, using stablecoins for cross-border micropayments reduced fees from 28.8% to just 2%, allowing gig workers to keep more of their earnings. Global consultants estimate that over $12 billion a year could be saved in remittance fees if stablecoins replaced wire transfers — money that goes straight into local households and consumption.
Where local banks perceive too much risk or too little profit to lend, stablecoin-based financing and decentralized finance can help fill the credit gap, playing a vital role in facilitating entrepreneurship and growth for African small and medium-sized enterprises.
Stablecoins and their superpowers
Wider adoption of stablecoins in developing countries could also counter the influence of players like China, which has spent years extending loans to poorer nations under onerous terms. As part of the Belt and Road Initiative, Beijing’s overseas lending has left dozens of countries saddled with debts they struggle to repay. In extreme cases, defaulting nations have had to relinquish strategic assets, such as ports and power plants, to Chinese control.
This “debt-trap diplomacy” thrives when nations lack alternative financing options.
By embracing dollar stablecoins and digital finance more broadly, developing countries can raise capital in new ways and unshackle themselves from such predatory arrangements.
Another promising path is tokenizing sovereign debt. Rather than relying exclusively on large foreign creditors, governments can issue bonds in smaller denominations on blockchain platforms, making it easier for local citizens and diaspora investors to participate.
Governments from Kenya to Brazil are already exploring tokenized bonds and Treasury bills that can be purchased and traded via digital wallets. Such decentralized fundraising could help countries refinance or buy back expensive foreign loans — effectively crowd-funding their way out of China’s shadow. Every dollar raised from a diaspora bond or global crypto investor is a dollar that doesn’t have to be borrowed from Beijing on tough terms.
CBDCs in the corner
Central banks have also spotted these opportunities. Dozens of central banks are developing central bank digital currencies (CBDCs) as state-controlled alternatives to private stablecoins. Proponents argue that a government-issued digital currency can increase financial inclusion and modernize payments, but the early evidence is underwhelming.
Nigeria’s eNaira, one of the first retail CBDCs, has flopped – 98% of Nigerians who opened eNaira wallets stopped using them by the end of 2023. Meanwhile, Nigerians continue to flock to dollar-backed stablecoins as a hedge against the plunging naira. This story repeats elsewhere: Enthusiasm for CBDCs often comes from the top down, while stablecoins gain adoption bottom up by meeting real user needs. Even China has had limited success persuading other countries to use its digital yuan, especially when dollar stablecoins already have a considerable head start globally.
Academic research suggests that when central bankers promote CBDC plans, stablecoin activity drops — evidence that rhetoric alone can siphon momentum from the private sector. That might please officials wary of competition, but it can deprive consumers of better services.
Moreover, research comparing countries that adopted CBDCs with those that did not, both before and after adoption, finds no effects on macroeconomic outcomes, such as GDP per capita or inflation, and adverse effects on financial well-being. In short, CBDCs have yet to deliver breakthrough improvements in financial access or efficiency, whereas stablecoins are already doing so.
Encouraging developing countries to use dollar-backed stablecoins is a win-win proposition, much as the printed dollar once carried forward the supremacy of gold. For the US, it means expanding the influence of the dollar — reinforcing its reserve currency status in the digital era and countering rivals who seek to promote alternative spheres of monetary control.
For developing nations, it means greater access to a stable currency, new pathways for investment, lower transaction costs, and escape hatches from heavy-handed creditors. In an increasingly tense geoeconomic landscape, digital dollars could become a linchpin of a more democratic and resilient global financial system.
The United States is embracing this opportunity: By championing dollar stablecoins and the open financial networks they run on, America can help unlock growth in emerging economies while buttressing its own economic might.
In the contest for hearts, minds and wallets around the world, a little stable currency could go a long way.
The Flexibility Trap: Remote Work’s Hidden Toll on Young Adults
This article was originally published by the Institute for Family Studies.
The rise of remote and hybrid work has reshaped the workweek, especially for younger professionals, according to a recent working paper of mine. Using the American Time Use Survey, I found that Fridays have effectively become the new weekend for remote-capable workers, especially those without children.
From 2019 to 2024, average minutes worked on Fridays fell by about 90 minutes in jobs that can be done from home, whereas on-site jobs saw little change. This pronounced “Friday effect” is largest among younger workers and those without children. One analysis finds that childless employees cut nearly two hours off their Friday work time (from ~542 minutes in 2019 to 427 in 2024), compared to only a half-hour drop for workers with kids. In other words, many 20- and 30-somethings with remote setups are seizing the chance to log off early, especially at week’s end, enjoying a taste of the weekend before it even starts.
The newfound flexibility goes beyond Fridays. Low-coordination, remote-intensive jobs—roles where tasks do not require constant real-time teamwork—have seen a broad decline in hours worked per day. By 2022–2023, employees in such positions were working 65–92 fewer minutes per day than in 2019, reallocating roughly the same amount of time to leisure instead. This suggests that many young remote workers are using the time saved from commuting or the slack of lighter supervision to relax or attend to personal activities. Prior work of mine has also shown that most of the cutback in work time has flowed straight into leisure (with only a small portion diverted to extra household chores). In effect, the short-term benefit of remote work for these individuals is clear: more free time and autonomy in managing their day.
Less Work, More Leisure — But More Time Alone
What are young adults doing with their newfound “free” time? A closer look reveals a concerning pattern: the extra leisure tends to be solitary. Among remote-heavy workers ages 30-33, the share of leisure time spent with others has declined from 70% in 2019 to 55%. In contrast, those in jobs requiring in-person presence saw no significant change—and they still spend the bulk of their leisure hours with friends or family.
Why might this be happening? One reason is asynchronous schedules. A 28-year-old who signs off early Friday afternoon may find that many friends or family are still working or are geographically scattered. The once-common ritual of coworkers grabbing happy-hour drinks or young professionals unwinding together at week’s end has become rarer for those fully remote. In its place might be a Netflix session, solo gym visit, or just additional screen time at home. What at first sounds like work–life balance—trading an hour of work for an hour of “me time”—can morph into social isolation if that leisure time isn’t shared with others.
Isolation and Young Adult Well-Being
Emerging data on the well-being of young adults raise red flags about this isolation. Even before remote work became widespread, researchers noted a decline in the mental and social health of Millennials and Gen Z. The expansive Global Flourishing Study finds that in many countries—including the United States—mental health is a significant “flourishing deficit” for young adults. In the U.S., for instance, the average self-reported mental health score is only about 5.7 (out of 10) among 18–29-year-olds, versus 8.1 for those in their 60s. In 2025, around 83% of young adults said they had experienced feelings of depression in the past two weeks, a rate nearly two-and-a-half times that of senior citizens. Similarly, about 34% of young adults report feeling lonely frequently, far higher than older groups.
Gallup data likewise show that globally 20% of employees feel lonely, with younger and fully remote workers feeling it most. In fact, Gallup’s workplace research calls Gen Z the “loneliest generation”: Gen Z employees are almost three times as likely as Baby Boomers to say they experienced “a lot” of loneliness the previous day. This loneliness has a direct connection to remote work preferences. Notably, Gen Z is the least interested in fully remote roles, with many young workers actively craving more in-person interaction on the job to combat isolation.
All of this suggests that social connection is not a luxury but a lifeline for flourishing in early adulthood. Psychologists have long known that strong relationships are critical for mental health and life satisfaction. The Global Flourishing data further reinforce this: those in romantic partnerships score significantly higher on well-being than those who are single, largely thanks to the support and sense of belonging that close relationships provide. Conversely, when young people feel adrift or alone, their overall life satisfaction and purpose tend to falter. If remote work arrangements inadvertently lead young adults to spend more time alone, they may amplify exactly those conditions—isolation and disconnection—that feed into poorer mental health.
The consequences of prolonged social isolation in one’s 20s and 30s extend beyond immediate mental health. These years are a formative period for building the foundations of adult life—careers, skills, friendships, and families. A fully remote or highly isolated work style can subtly undermine these developmental milestones in several ways:
Loss of “Social Scaffolding” at Work. In traditional offices, young employees benefit from informal interactions that help them grow. Think of chatting with a senior colleague who offers career advice, or observing how seasoned professionals handle challenges. In a remote setting, much of this osmosis is lost. Unlike older workers who already have established networks and confidence, newcomers depend on guidance and encouragement from managers and mentors. Over time, this could slow their career development and even sap their engagement.
Fewer Pathways to Meet Partners. Young adulthood is also when people tend to form long-term romantic relationships. Historically, the workplace has been one common venue to meet a significant other. In the 1980s, nearly 1 in 5 couples in the U.S. met through work. Today, that figure is just around 10%, a decline driven partly by the rise of online dating but likely to be exacerbated by remote work. Fewer days in the office mean fewer casual conversations that spark a friendship or more. Of course, meeting a partner is increasingly moving to dating apps and social media, but those digital avenues do not fully replace the trust and context that can come from getting to know someone in person over time.
Weakening Community Ties. Beyond the workplace, a fully remote lifestyle can encourage geographical drifting. On one hand, remote jobs enable young adults to move anywhere—often away from hometowns or high-cost cities— which can be positive for affordability. On the other hand, this freedom can also mean that young workers end up living in new locales where they have no built-in community. They might find themselves working from a small apartment in a city where they do not know anyone, or bouncing to a new location every year. The decline in daily in-person interaction may not immediately register as a problem—until one day, the remote worker looks up and realizes he hasn't felt truly known or supported by a community in a long time.
Counting the Cost of Flexibility
None of this is to suggest that we should revert to the 9-to-5, five-days-in-office grind of old. Remote and hybrid work offer genuine benefits: greater flexibility for family needs, less time wasted on commuting, the ability to hire and work from anywhere, and often higher productivity for focused tasks. For example, many parents, especially mothers, who would otherwise have to drop out of the labor force, are able to continue working, even if part-time. But as we embrace the convenience of remote work, we must also count its cost—not just in economic terms, but in human terms. The experiences of younger workers serve as an early warning. They are reporting record-high loneliness, declining mental health, and fewer close relationships.
How might today’s work-from-home norms reshape the social and emotional development of young adults in the long run? Will the 30-year-old of 2030 be more likely to struggle with anxiety and loneliness because they spent their formative work years isolated in a bedroom office? Could the trends we see now — later marriages, fewer friendships, weaker professional networks — accelerate under a regime of minimal in-person engagement? These are not just individual concerns, but societal ones. Fewer connections and lower flourishing among young adults can have ripple effects on community life, civic engagement, and family stability in the years ahead.
To avoid drifting into a future where flexibility comes at the expense of fulfillment, employers and families alike should take these trade-offs seriously, experimenting with initiatives like:
Establishing “in-office anchor days,” targeting especially younger staff for mentoring sessions and team-building activities.
Crafting “team charters” that ensure hybrid work still includes synchronous collaboration and social interaction, thereby mitigating burnout.
Formalizing mentorship opportunities that pair junior and senior employees around specific projects and measurable goals.
Coordinating volunteer and civic engagement opportunities outside of the workplace to further build trust and camaraderie.
In the end, the measure of a life, or a career, well lived is not only the output we produce, but the relationships and meaning we cultivate along the way. Flexibility in work is a double-edged sword: it can optimize personal comfort and family life, yet also gradually erode the informal bonds and growth experiences that help young adults flourish. As we navigate this grand experiment in remote work, we would do well to remember that efficiency is not enrichment. Only by counting (and correcting for) the social costs of remote work can we ensure that the next generation thrives—not only at work but also in their personal lives.
Remote Staff Hours Fall, but Productivity Steady (For Now)
This article was originally published in Gallup News.
As remote work and hybrid work became mainstream in the wake of the pandemic, many leaders have asked these questions: Are remote workers really working? What does that mean for productivity?
The answer is nuanced. Remote workers are spending less time working, but the relationship between remote work and productivity is more complex.
Remote Employees Are Working Less
A recent study based on data from the American Time Use Survey (ATUS) from 2019 to 2023 found that full-time employees in remote-capable jobs are spending less time on work and more on personal activities.
By 2022, people in heavily remote roles were working about an hour less per day than in 2019, on average. Of that time, they were redirecting 30 to 60 minutes to leisure, a trend consistent with broader remote work productivity statistics. This decrease goes beyond reduced commute time, with the drop in commuting time accounting for only a small fraction of the reduction in work hours.
Some groups reported working even less. In jobs open to telework, men, unmarried adults, and those without children showed the steepest declines in hours worked and the greatest gains in leisure time. For example, single men over 45 who work remotely clocked over two hours less per day on work activities in 2022 than in 2019. Women saw even larger declines in hours worked, although that is driven largely by those without a college degree.
These findings align with more recent Gallup studies on hours worked and emerging remote work trends in the broader workforce. In 2019, U.S. employees reported working an average of 44.1 hours per week. In 2024, they averaged 42.9 hours per week.
Productivity Benefit of Remote Work: Increased Talent Pool
Perhaps the greatest concern in boardrooms about reduced work hours is the potential hit to productivity. If employees are working 10% fewer hours, will output or innovation fall by 10%?
Not necessarily.
Using a model in which employees choose jobs based on their capabilities and preferences, the study finds a slight increase in output per worker in the economy. This growth did not result from employees in more remote-intensive jobs working more productively. Instead, people were better able to sort into roles that suited them, and employment shifted toward sectors with higher output per worker.
Organizations that broke free from geographic constraints and hired the best-fit talent for each role — regardless of location — experienced a boost in remote work productivity. While the ATUS does not capture information on the quality of managers, prior Gallup data show that managers play a vital role in how technological change affects employees. That is, increases in technology tend to have positive effects on workers, but those effects are greater when managers build trust in the workplace. The bar for managerial quality is likely even higher for those leading remote employees. Knowing how to manage a remote team is now a core leadership competency. It affects how clearly organizations set expectations, manage performance and build trust.
Although the model addresses sector-level and overall productivity, the benefits for individual organizations are more complex. They depend not only on the types of tasks being done and how suitable they are for remote work, but also on the makeup of the talent pool. Growing evidence shows that being near coworkers can have positive spillovers for productivity, and the effect of working in the same location on communication depends on the type of work and whether the interaction is between employees or between employees and managers.
These factors are especially important for younger or newer employees who may not yet have established routines or communication patterns within the organization. For them, working in the same location as their coworkers could provide more benefits.
Remote Work Increases Job Satisfaction — If the Boss Is Bad
Even if the productivity benefits of remote work are mixed, employers might offer remote flexibility as a perk to attract and retain quality employees. Gallup data on hybrid work suggest as much: 76% of hybrid workers say “improved work-life balance” is one of the “greatest benefits” of hybrid work. For many, the ability to work from home offers greater autonomy and flexibility. That is consistent with several randomized experiments assessing the effects on retention, including those by Nicholas Bloom and coauthors. But companies need to recognize how fully remote strategies can go awry by attracting people who are less likely to put in discretionary effort.
Company culture, however, has a stronger influence on employees’ feelings about their workplace than location. Another recent study found that workplace factors — such as feeling appreciated and receiving clear communication, among other workplace practices — explain most of the differences in job satisfaction and intent to leave.
Meanwhile, remote work is linked to job satisfaction in the raw data. But that link disappears when accounting for the previously mentioned workplace factors, except for one group: employees who “sometimes work from home.” In other words, workers value some flexibility, but culture and management matter more.
The study also shows that the benefits of remote work are not the same for everyone — they vary based on the type of work and the quality of the manager. This suggests that remote work can feel like a benefit when management falls short, but it does not raise performance on its own.
In these cases, fully remote work arrangements may help individuals make the best of a bad situation, but that is a workaround, not an organizational strategy.
The Best Hybrid Work Model Focuses on Culture and Fit
The future of hybrid work and remote work is already here. What matters now is how much flexibility to allow, how it works in practice and how organizations manage the risks. These choices rest with managers and organizational leadership.
Build a strong workplace culture first. Most variation in job satisfaction and intent to leave comes from how employees view their workplace practices, not from compensation. Hybrid work can support engagement, but it cannot replace sound management. High-quality management remains a competitive advantage.
Assess your workforce and how work gets done. Remote work fits some tasks better than others. Organizations need to understand the factors that influence successful client outcomes and how these are evolving with the economy and technology. Use both remote work and on-site collaboration in ways that elevate performance.
Use remote work to expand your talent options and improve role fit. Productivity increases when people are doing work that suits their talents and strengths. Remote and hybrid work arrangements create more ways to match employees to the right tasks under a clear talent strategy.
Bottom Line
Less time spent working does not automatically mean lower output. If anything, the shift to hybrid and remote models has helped many organizations make better use of each employee’s talents. But the declining trend in time allocated to work, particularly among remote-capable employees, and the deteriorating employee engagement trend Gallup has documented for years indicate a broader risk. With this risk in mind, leaders need to ensure remote work flexibility strengthens — not erodes — long-term engagement and performance.
States’ pushback against ESG finance contains key lessons for powering AI
This article was originally published in The Hill.
Artificial intelligence is not only disrupting labor markets and reshaping geopolitics, but also becoming one of the most energy-intensive technologies of our time. In fact, it is placing an already overloaded power grid under even more strain.
Recent estimates from the International Energy Agency project that global data centers will more than double their electricity consumption by 2026, potentially exceeding 1,000 terawatt-hours. This will be driven not just by the energy needed to operate and cool chips, but also by the vast computational demands of AI model training and deployment.
This surge in demand is occurring just as utilities across the U.S. are warning of capacity shortfalls. Worse, regulatory bottlenecks and climate targets over the last four years made it harder to bring new energy supply online. In short, we are not on track to meet the energy demands of an AI-driven economy.
What happens when national-scale ambitions conflict with local economic needs? One overlooked episode — the state-level backlash over environmental, social, and governance finance, known widely as ESG — offers perspective.
ESG investing aimed to encourage corporate responsibility, but it also introduced new constraints on energy financing and infrastructure development. When major financial institutions began limiting support for oil, gas, and other politically disfavored sectors, some U.S. states responded with legislation barring public contracts with those firms.
Texas led the charge in 2021, passing laws that effectively pushed several of the country’s largest municipal bond underwriters, such as JPMorgan and Citigroup, out of the state’s market. Critics warned this would raise borrowing costs for local governments. But in a new working paper, we examined the actual financial impact using comprehensive data on bond yields between 2017 and 2024. We found that even in large and complex deals, where underwriting relationships matter most, the exit of ESG-sensitive firms did not significantly affect pricing.
Texas’s policy led to no systematic increase in borrowing costs. We also found that the same was true in Oklahoma after it adopted a similar policy in 2023.
What explains this null result, which runs counter to what some ESG proponents predicted and expected? Partly, it reflects long-term shifts in the structure of the municipal finance market. Underwriting spreads have declined over the last two decades. Competition has also intensified, and many states — especially those with zero income tax — retain robust investor demand.
But the deeper insight is this: When states push back against perceived overreach by firms or ratings agencies, markets often adjust. Other underwriters step in, investors recalibrate and life goes on.
This lesson matters because AI’s energy appetite is forcing a similar reckoning. National climate goals have prioritized decarbonization, often by sidelining traditional energy sources before viable alternatives are ready. Meanwhile, capital has flowed toward green technologies unable to cost-effectively scale — arguably at the expense of system reliability.
Training a single advanced AI model can consume several gigawatt-hours of electricity — roughly equivalent to powering hundreds of U.S. homes for a year. And inference costs (referring to the energy needed to run these models at scale) could dwarf training in the long run. Meeting this challenge will require not only new capacity, but also investments in energy efficiency and grid interoperability, so that power flows can dynamically match shifting demand across regions and time zones.
The National Energy Dominance Council, established by President Trump in February, is a pragmatic step to bridge the gap between national ambition and on-the-ground implementation. By coordinating states and private stakeholders, the council is working to accelerate permitting timelines, identify high-impact transmission corridors, and streamline regulatory processes that often slow energy infrastructure.
Its convening power has also facilitated more transparent negotiations between utilities and major corporate energy buyers, encouraging market-driven investments in renewables while reinforcing grid reliability. Rather than impose top-down mandates, the council is helping align incentives and remove bottlenecks that have historically stalled progress.
In this environment, states cannot afford to be passive. They must act to secure their energy futures both to meet AI-driven demand and to maintain economic competitiveness. Some are already doing so. Arkansas, for example, has launched efforts to fast-track natural gas permitting. Georgia is investing heavily in nuclear and grid modernization. And Texas, despite the fallout from its 2021 winter storm, remains the largest generator of wind power while also expanding natural gas generation to stabilize supply.
Critics may worry that such moves conflict with broader climate goals. But most would agree that the alternative — a brittle grid, rolling blackouts, and politically opaque AI rationing — is worse. Policymakers at all levels should recognize that reliability and capacity are not optional in a digital economy. Nor is energy neutrality a viable long-term position.
If ESG finance taught us anything, it’s that states can and do push back when national trends threaten their core interests. The market consequences of these interventions are not always dire. Sometimes, they are neutral. Occasionally, they even improve outcomes by correcting for one-size-fits-all approaches that misalign incentives.
With the energy transition entering a new phase, and AI accelerating that shift, now is the time to reexamine the balance of power between state priorities and federal, or corporate, initiatives. We cannot afford to sleepwalk into an AI-powered blackout.
AI in the Classroom: It Needs More Than Guardrails—It Needs Purpose
This article was originally published by the Institute for Family Studies.
Recent debates over artificial intelligence in schools have understandably zeroed in on the risks. Educators and parents worry about biased algorithms, invasive data practices, and other harms from rushed AI adoption. These concerns are well-founded. Research shows that AI systems can introduce risks and harms that extend beyond bias and discrimination in education.
But treating AI merely as a threat to contain addresses only half the equation: AI is here to stay, and how we manage its risks and design it to augment human capabilities is what matters most. In this sense, the relevant comparison is not between AI and a perfect world, but between AI and the status quo—one that already includes harms and failures by fallible humans.
Missing from the current conversation is an affirmative vision for what we want AI in education to achieve. Yes, guardrails are needed, but so is a guiding star. AI in education is not just an external force to be fenced in; it is also a tool whose impact will ultimately reflect the values and goals we build into it. The question, then, is not only “How do we prevent harm?” but also “How do we design AI to actively promote the well-being and growth of students?”
The success of AI in schools will depend on whether its implementation remains human-centered. This shift in framing—from damage control to purposeful design—opens the door to a more constructive approach. Rather than aiming for the absence of negatives, we should set our sights on the presence of positives: safer, healthier, more enriching learning experiences.
In a recent working paper, we introduce a framework called “Flourishing by Design” that builds on the Global Flourishing Study (GFS). The GFS was led by Baylor University and Harvard’s Human Flourishing Program, in partnership with Gallup, and included a longitudinal survey of over 200,000 people across 22 countries along six dimensions of human flourishing. Building on this vein of human development and well-being research, our framework can be applied to learner flourishing and the role of AI. We contend that technology ethics should go beyond box-checking and “ethics washing,” embedding ethics into the very fabric of product and policy development, tied directly to multi-dimensional outcomes that matter for students’ lives.
Put differently, when companies—especially in the tech sector—build products or services, they need to think about the end use and the impact on human flourishing from the start. If we had done that from the onset of the internet revolution, we would have set up property rights over data (instead of letting digital intermediaries extract and monetize our digital footprints) and created social media platforms that promote meaningful relationship building (instead of ones that lead to hyper-personalization and “keeping up with the Joneses” phenomena).
One clear area where AI could support flourishing is by cultivating intellectual tenacity, which refers to the willingness to engage with difficult problems, resist premature closure, and revise one’s beliefs when faced with new evidence. Current educational models often reward speed, correctness, and compliance over thoughtful perseverance. AI systems, if intentionally designed, could help reverse this trend. For example, rather than steering students toward the fastest path to the right answer, an AI tutor could detect when a learner is struggling productively and offer prompts that encourage deeper inquiry: “Would you like to explore why this approach didn’t work?” or “Try explaining your reasoning out loud before we move on.” Over time, such personalized nudges—combined with reflection tools and feedback loops—could reinforce habits of intellectual resilience and broader cognitive skills.
A flourishing-by-design approach would require that educational AI tools be evaluated and optimized against these broader outcomes—not just narrow performance metrics. For example, does an AI homework helper improve a student’s understanding and self-confidence? Does an AI tutoring system enhance learning without diminishing curiosity or creativity? These questions elevate flourishing as a core design and accountability principle—rather than treating student well-being as an afterthought or a lucky side-effect.
To be sure, technology is the icing—it is not the cake itself. If the institutions that lay the foundation for our economy and society, especially family and faith-based organizations, were to deteriorate further, technology will not be a panacea. But it can be an amplifier.
Why propose a new framework when there are already so many, especially following the rise of corporate social responsibility (CSR), socially responsible investing (SRI), and more recently environmental, social and governance (ESG) frameworks? Because current approaches—from tech industry self-regulation to education-specific guidelines—have clear limitations.
While well-intentioned, past frameworks often devolve into check-the-box exercises. Nearly every major company now publishes ESG or “responsible AI” reports, yet tangible changes can be elusive and ratings agencies cannot even agree on what defines a credible ESG score. Compliance-driven frameworks tend to fixate on avoiding liabilities—ensuring an algorithm does not blatantly violate a law or embarrass the company—rather than on maximizing social benefit. They also often compartmentalize issues (privacy versus innovation, bias versus efficiency), instead of seeking solutions that advance multiple values. In education, for instance, debates often pose privacy and equity in opposition, implying a trade-off between protecting student data and using data to help at-risk learners.
But this trade-off can be overcome. For example, new privacy-preserving data practices, such as secure data-sharing via cryptographic techniques, allow schools and vendors to collaborate without exposing sensitive information. Although our paper spells out more detail—and further work is surely needed—the Flourishing by Design framework is abundantly transparent: does an organization make people better off along the six dimensions of human flourishing measured in the GFS?
The conversation about AI in schools is at a crossroads. Up until now, much of it has oscillated between excitement over AI’s promise and alarm over its perils. What’s needed instead is a unifying vision that channels the innovation toward what truly matters. A flourishing-based model provides that north star. It does not dismiss the real warnings sounded by critics, but rather demands even greater accountability for long-term, human-centered, measurable outcomes. It also urges educators, developers, and regulators to move beyond a defensive crouch. The goal should not be to merely tame AI’s disruptions, but to shape AI in education in such a way that it helps produce healthier, wiser, more fulfilled students.
Meeting this task will require effort: new design methodologies, cross-disciplinary input, and updated policy tools. The Global Flourishing Study and the associated Flourishing by Design framework are a start. If we succeed, the narrative of AI in education could shift from one of narrowly averted harms to one of empowering transformation—technology that not only respects the dignity of human beings but actively furthers their flourishing.
The GENIUS Act Ushers Stablecoins Into the Fold, and Banks Into New Competition
This article was originally published in Real Clear Markets.
Congress redefined the playing field for digital assets. Although the recent Senate win was on party lines, the GENIUS Act promises to bring stablecoins – blockchain-based tokens pegged to fiat currency – more squarely into the regulated financial system. This legislative framework is a constructive development for the crypto sector and the banking industry alike: it introduces long-awaited regulatory clarity, establishes clear licensing pathways for issuers, and sets ground rules that could integrate stablecoins into mainstream finance. In doing so, it also forces traditional banks to face a new reality: they will have to compete more directly for deposits in a world where digital dollars provide compelling alternatives.
For years, stablecoin issuers operated in a gray area – treated as money transmitters in some states, eyed warily by federal regulators, and with uncertainty about whether tokens like USDC or USDT might be deemed securities. The GENIUS Act seeks to end this ambiguity, creating the first comprehensive U.S. regulatory framework for stablecoins by defining them as “payment stablecoins” fully backed by safe assets and giving federal agencies like the Office of the Comptroller of the Currency (OCC) clear authority to oversee them. The Act also provides a welcome alternative to Europe’s heavy-handed approach of regulating digital assets while pushing central bank digital currencies, which displaces demand for stablecoins.
Under the Act, only approved issuers could offer stablecoins, whether as banks or as special non-bank entities that obtain a federal license. While there are quibbles about the technical details on how the licenses are given out, it carves out a legal path for stablecoin providers to operate under bank-like supervision without being banks. This oversight includes requirements like 1:1 reserve backing (in cash or Treasury bills), segregated reserves, monthly audits, and strict capital and liquidity rules. Such measures aim to bolster trust in stablecoins as a safe medium of exchange, much like deposits, but in digital bearer form.
Some critics may worry that the bifurcated licensing regime could open the door to regulatory arbitrage, though the Act attempts to mitigate this by applying strict reserve, audit, and disclosure requirements across both pathways. Crucially, the Act also specifies what counts as high-quality reserves. Stablecoin issuers would be required to hold only short-term U.S. Treasury bills or equivalent safe assets against their tokens. This not only safeguards the peg (each token truly backed by $1 in liquid assets), but also pulls stablecoins into the orbit of traditional finance. Regulated stablecoins could resemble money market funds or narrow banks with their circulating tokens functioning as a new form of dollar-denominated money, which could allow issuers to become major purchasers of Treasury bills. Circle’s USDC, for example, already keeps the bulk of its $60+ billion reserve in short-term U.S. debt.
By legitimizing stablecoins, Congress is also pushing banks to become more competitive in how they operate and provide value to borrowers. Banks have long enjoyed an advantage: sticky deposits. Businesses and consumers park trillions in checking and savings accounts that pay little or no interest, providing banks with cheap funding to make loans. But stablecoins change the equation. A regulated stablecoin gives holders a digital dollar that is instantly transferable worldwide and fully backed by interest-earning assets – effectively combining the liquidity of a checking account with the yield of a T-bill investment. While some worry that deposit flight could constrain bank lending, especially for smaller institutions, the more likely scenario is a recalibration: banks may increasingly differentiate themselves through credit expertise and integrated digital services, rather than passive liquidity capture. The rise of dollar-denominated stablecoins could also advance broader U.S. objectives by accelerating de facto dollarization in some emerging markets, even as it complicates local monetary policy and financial stability.
The Treasury Borrowing Advisory Committee (TBAC) has taken notice. In its Q2 2025 report, the committee noted that stablecoin issuers already hold more than $120 billion in U.S. Treasury bills, and that continued growth of stablecoins could generate up to $900 billion in additional T-bill demand in coming years. That represents a major new source of financing for the government, as well as a corresponding outflow of funds that might previously have sat in bank deposits. While the committee pointed out that surging stablecoin adoption will likely come “at the expense of bank deposits” – and banks are already seeing hints of this shift as tech-savvy customers explore digital dollars for uses ranging from cross-border remittances to parking spare cash – the long-run relationship need not be one of substitution.
Stablecoins need not cannibalize traditional banking; as my new paper points out, banks can respond with digital twins of their own deposits. Several large banks are already exploring stablecoins that would settle on public or permissioned blockchains while remaining inside the bank’s regulatory perimeter. A tokenized deposit lets customers tap the speed and programmability of digital dollars for payroll, trade finance, or cross-border settlement, yet keeps the relationship – and the associated lending, advisory, and treasury services – anchored at the bank. Stablecoins have the potential to be complementary: they broaden the menu of money-like instruments while banks compete on service, trust, and credit expertise instead of relying on inert, zero-yield deposits alone.
Shifts in the mechanics of financing are nothing new historically, but this is the first time technology enables private dollars to move globally and near-instantly with such ease. Rather than resist this shift, the GENIUS Act leans into it, aiming to harness the innovation while corralling the risks. If implemented well, it could foster a more efficient and competitive financial system. Stablecoins, with appropriate oversight, can increase competition in deposit markets and improve the efficiency of money-like instruments – making transfers faster and cheaper, and potentially widening access to dollar-based savings in communities poorly served by banks. Banks must revisit how they attract and retain customers’ funds in an era when loyalty can be withdrawn with a few clicks to a digital wallet, and the GENIUS Act is a step toward establishing the rules of the game for digital assets.
Play the Long Game With Human-AI Collaboration
This article was originally published in Gallup News.
C-suite leaders are captivated by AI’s potential to increase productivity, from generating insights to automating routine tasks. But many studies suggest technology works best when it complements human effort, not when it replaces it.
Economic research over several decades confirms that large productivity gains happen when new tools strengthen workers’ skills, judgment and creativity within supportive organizations. In a 2002 study of U.S. firms, economists found that installing information technology had little effect on productivity unless companies simultaneously reorganized work and upskilled employees. Firms that introduced computers alongside complementary changes — like decentralizing decision-making and expanding workers’ responsibilities — saw real gains, whereas those implementing tech without such organizational innovation saw negligible benefits.
“Awesome technology alone is not enough. What you really need is to update your business processes, reskill your workforce, and sometimes even change your business model and organization in a big way,” said Erik Brynjolfsson, a professor of economics and director of the Digital Economy Lab at Stanford University.
Recent research on generative AI shows this in action. One experiment found that professionals given access to ChatGPT were 37% more productive on writing tasks, with the greatest benefits for less-experienced workers. The tool handled first drafts, freeing employees to focus on higher-value editing and idea development. Instead of replacing workers, AI expanded their abilities and narrowed skill gaps. Employees also reported greater satisfaction as tedious work gave way to more engaging work. Other research has replicated this result in larger samples of workers in entirely different settings.
A 2021 study using Gallup data found similar benefits for employees: Technology innovation had a positive effect on worker wellbeing, but mainly for employees who say their boss creates a trusting work environment.
In contrast, pursuing AI without equal focus on people often leads to underwhelming results and added organizational risks. Still, many companies fall into this trap. Last year, Gallup found that 93% of its CHRO roundtable members said their organizations had started using AI, but only 15% of U.S. employees said their employer had communicated a clear plan for integrating AI into their work. What slows digital transformation may not be the tools — it may be how people feel about those leading the change.
Designing a Human-AI Future
A seminal 2024 article led by David Autor, studying the composition of work activities between 1940 and 1980, found that new types of jobs usually appear when technology helps people work more efficiently or when there is high demand for certain skills. As a result, the tasks we see today do not reflect the gamut of work next year, and leaders should treat AI as a force multiplier for human ingenuity to unlock the products of the future. So how can leaders use AI to elevate what people do best? Some successful companies offer a guide:
Invest in skills, training and habits of leveling up. Beyond specific skills, organizations benefit when employees develop persistence and resilience. Other research shows that workers in jobs requiring greater intellectual tenacity — broadly referring to grit and ambition — are better protected from major disruptions, even when accounting for skill and education levels. Leaders who model and promote continuous learning and improvement help create a workplace where technology strengthens, rather than sidelines, people.
Redesign jobs and workflows to support human-AI collaboration. Simply adding AI to current processes often results in small gains. To see meaningful improvement, organizations should rethink how work is done. Let AI take on repetitive, data-heavy tasks and allow employees to focus on what people do best: creativity, complex problem-solving, interpersonal communication and nuanced decision-making. Many banks, for instance, now use AI to process paperwork for loans so that bankers can spend more time advising clients.
Foster a culture of curiosity. Make it clear that AI is there to help employees, not reduce staff. Involve workers early by asking how AI could improve their work and the business overall. When employees help shape AI tools, they are more likely to support and use them.
The success of any AI initiative depends on the people, starting with whether employees feel that management creates a trusting work environment that is safe to experiment in. AI can help with tasks and generate insights, and sometimes even play a role in validating results, but humans have a competitive edge in connecting with customers through empathy, applying creativity to shape and use AI-driven ideas, and developing the tasks of the future. While focusing on people might seem to slow initial deployment in favor of confronting structural organizational issues, it sets the stage for real, lasting progress.
Let’s Make NAEP a True National Yardstick for Local Autonomy
This article was originally published in The74 with Goldy Brown.
Student outcomes in K–12 education have largely stagnated over recent decades. Despite incremental improvements in the 1990s and early 2000s, national academic performance peaked around 2013, while progress in closing achievement gaps among subgroups stalled even earlier. Recent developments at the Institute of Education Sciences (IES), particularly the downsizing of staff for the National Assessment of Educational Progress (NAEP), create an opportunity to rethink the role this tool can play.
In particular, the Trump Administration could explore using the NAEP to promote greater transparency among schools, parents, and local communities, as well as to enhance academic rigor and ensure genuine accountability in a comparable way across schools and states. That would mean replacing a disparate collection of state tests with a single national assessment administered to every fourth and eighth grade student every year.
Parents, educators, and state leaders agree that more information — not more bureaucracy — is needed to make informed decisions for their children and communities, as well as to foster greater competition. Making the NAEP a truly national assessment would provide this information in a consistent, credible, and actionable manner.
This would require a feasible restructuring of the IES to focus on the annual creation and implementation of the NAEP, in contrast to its previous biennial schedule. Additionally, states already have the infrastructure for standardized testing, as all 50 states administer various assessments.
Some adjustments might be necessary for the reformed IES, which would need to collaborate with state offices responsible for test administration to successfully implement the NAEP on an annual basis for all eligible students, not just the current sample populations. However, there are still many advantages to this approach.
First, NAEP provides a consistent and academically rigorous measure of student performance. Many states report higher proficiency rates on their own assessments than on NAEP, creating a false sense of achievement. If all fourth and eighth grade students in states that receive federal Title I funding were required to take the NAEP annually, the discrepancy between state and national standards would become harder to ignore. States would have a stronger incentive to align their instructional practices with higher expectations.
States such as Mississippi have already shown what’s possible when NAEP results are taken seriously. Mississippi’s so-called “miracle” — its leap into the top half of state rankings in 2019 and 2022 — demonstrates the value of using NAEP-aligned standards as a driver for systemic change. By contrast, allowing states to accept federal funding without comparable transparency has led to low expectations and weak accountability frameworks.
Second, expanding NAEP would provide parents with a more accurate picture of how their children are performing relative to peers nationwide. Calls for greater transparency in education — amplified during and after the pandemic — have made clear that many families want more than vague reassurances from schools. A truly national assessment would offer objective, comparable data without increasing testing burdens year after year. In its current form, NAEP tests only samples of students, providing no real insight into how individual students or schools are doing.
Third, this proposal could significantly reduce unnecessary educational costs. To receive Title I funding under the Every Student Succeeds Act, states must administer annual assessments in grades 3 through 8, a requirement that consumes substantial classroom time and financial and instructional resources.
If Congress eliminated this requirement and recommended that states administer only the NAEP in fourth and eighth grades, that could facilitate more targeted, transparent evaluations and reduce assessment costs for states. Additionally, standardized tests administered in grades 3 to 8 may not be necessary for improving student outcomes. A study of test scores in Texas and Nebraska showed that, on average, a student’s test scores in one year correlated at a rate greater than 0.90 with their performance the following year.
Finally, making NAEP universal would offer a balanced form of federal oversight: less intrusive than programmatic mandates, but more informative than current reporting requirements. If decentralization is the path forward for U.S. education, it must be accompanied by a shared yardstick to assess progress. A national benchmark can support local autonomy while enabling cross-district comparisons that inform parents, educators, and policymakers alike.
Federal initiatives to improve student outcomes have historically produced mixed results. The Obama-era effort to tie teacher evaluations to student performance had little impact at the national level, though districts like Dallas and Washington, D.C., saw promising gains. These cases suggest that policy tools must be both well-designed and responsive to local implementation contexts.
Designating NAEP as the national assessment meets both criteria. It would offer the federal government a low-cost, high-impact mechanism for improving transparency and setting consistent expectations without dictating how states should teach or allocate resources; those decisions would remain with the states.
In an era of educational fragmentation, the NAEP stands out as a uniquely credible and underutilized tool. Repurposing it as the primary national assessment — administered annually to all fourth and eighth graders in states receiving Title I dollars — would promote transparency, reduce redundant testing, and align incentives around higher academic standards. This reform would offer a shared benchmark to evaluate progress across states and districts. At a time when parents, educators, and policymakers are calling for both accountability and flexibility, a restructured NAEP provides a rare opportunity to deliver both.
Occupational licensing stifles economic growth and labor market equity
This article was originally published in The San Diego Union-Tribune.
California consistently ranks as the most regulated state in the country—so much so, in fact, that comedian Bill Maher recently told Gov. Gavin Newsom that he needs to “take a chainsaw” to California’s overgrown regulatory code. And while many Californians might worry about the environmental, social, or safety consequences of deregulation, there is at least one category of state regulation that, with a few easily identifiable exceptions, serves only to protect special interests at the expense of low- and middle-class wage earners: occupational licensing.
Not long ago, only a small fraction of American workers needed a license to do their jobs. Today, nearly a quarter of the workforce is subject to legally mandated licensing—everything from hair stylists to plumbers to travel guides. That explosion of regulations hasn’t just inconvenienced would-be professionals; our new research published in Humanities and Social Sciences Communications shows it’s impeding employment, especially for low- and middle-wage earners.
In the study, we collected and analyzed data from every state to measure how occupational licensing restrictions have changed over the last few years. We used machine learning to track the exact language of state regulations—terms like “shall,” “must,” and “required”—to see where and how they apply to specific occupations. We found that occupational restrictions have nearly tripled since 2019, especially in more Democratic-leaning states, and disproportionately target lower-wage fields. In other words, the occupations that already offer relatively modest pay have seen the largest increase in barriers to entry.
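To make the method concrete, here is a minimal sketch of the term-counting step, applied to invented sample text. It is an illustration of the general approach rather than the study's actual machine-learning pipeline, which also maps the language to specific occupations.

```python
import re

# Binding regulatory language of the kind tracked in the study.
RESTRICTION_TERMS = ("shall", "must", "required", "may not", "prohibited")

def count_restrictions(text: str) -> int:
    """Count occurrences of restriction terms in a regulation's text."""
    text = text.lower()
    return sum(
        len(re.findall(r"\b" + re.escape(term) + r"\b", text))
        for term in RESTRICTION_TERMS
    )

# Invented sample text, for illustration only.
sample = (
    "An applicant shall complete 1,500 hours of training and must pass "
    "the board examination. A license is required before advertising."
)
print(count_restrictions(sample))  # -> 3
```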
To be fair, these restrictions do raise wages for those who manage to get the license—a result consistent with prior work. At first glance, that might sound like a boon to workers. But it’s also clear that when licensing rules pile up, there are fewer jobs to go around. In our study, a 10% rise in these regulatory barriers caused a 2% increase in hourly wages but a 4% drop in employment in those occupations. That’s not just a theoretical loss. It translates into fewer options for workers—particularly those hoping to enter a new field or move to another state in search of higher pay.
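To see what those two elasticities imply together, consider a back-of-the-envelope calculation. The 2% and 4% figures come from our study; the baseline wage and employment numbers below are invented purely for illustration.

```python
# Hypothetical baseline for one licensed occupation.
baseline_employment = 100_000   # workers (assumed)
baseline_wage = 20.0            # dollars per hour (assumed)

# Effects of a 10% rise in regulatory barriers, per the study.
new_employment = baseline_employment * (1 - 0.04)  # 4% fewer jobs
new_wage = baseline_wage * (1 + 0.02)              # 2% higher wages

old_bill = baseline_employment * baseline_wage
new_bill = new_employment * new_wage
print(f"{(new_bill / old_bill - 1) * 100:.1f}% change in total earnings")
# -> -2.1%: the insiders' wage gain is outweighed by the jobs lost.
```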
Because most occupational regulation comes from state and local governments, federal attempts at reform face an uphill battle. The Obama White House highlighted the risk licensing requirements pose to mobility and competition, and the Trump administration famously enacted a “1-in, 2-out” (and more recently, a “1-in, 10-out”) rule for federal regulations. But although the Trump Administration could use the bully pulpit, most of the power to reduce occupational licensing burdens resides in state capitals.
Yet there’s a silver lining: Our analysis shows that even partial reforms, such as those enacted in Idaho and Virginia, can boost hiring. If policymakers are unable to remove licensing outright—sometimes because of political constraints, sometimes because of valid quality or safety concerns—they can still streamline burdensome procedures that are only tangentially related to actual skill or public safety. Lowering license fees, removing residency requirements, and granting reciprocity for out-of-state licenses are all steps that reduce red tape without sacrificing meaningful standards.
Why does this matter so much for the middle class? Because rigid regulations punish those who can least afford the time and money to fulfill extra mandates. Licensing can mean months or years of courses, registration fees, and exams. For middle-income earners—or entry-level workers looking to climb the ladder—this extra friction can be insurmountable. That’s especially problematic when so many Americans need to adapt to shifts in technology and the economy.
Consider a plumber’s apprentice in Illinois or a young cosmetologist in California. With regulations layered on top of an already challenging training process, people who don’t have the cushion of savings might easily be forced to drop out of the pipeline. And if they do persevere, they’re often locked into the state where they obtained that license—relocating means requalifying, possibly at enormous personal cost. That dampens labor mobility, one of the key drivers of regional economic growth.
This is why state legislatures, especially in the heavily regulated state of California, should prioritize thorough licensing reviews. They can convene regular sunset committees to weed out outdated provisions, encourage reciprocity across state lines, and ensure that the rules are transparently linked to safety or competency concerns—not simply designed to protect entrenched interests. Such reforms would open new job opportunities and, in the long run, reduce a hidden tax on families trying to pursue better-paying work.
When the vast majority of new restrictions concentrate in occupations with modest wages, the net result is clear: Fewer hires, reduced geographic flexibility, and a steeper climb to the middle class. Shedding this regulatory baggage should be a political no-brainer—reforms could unite lawmakers who share a desire for economic growth and more equitable labor markets.
As mentioned before, occupational licensing has a place in some jobs. If you’re seeing a brain surgeon or hiring a structural engineer, there’s obviously a strong case for proven certification. But somewhere along the way, we stretched that logic to more routine jobs, imposing license burdens that can’t be justified on health or safety grounds. Rolling back these requirements would let more Americans seize opportunity—winning a much-needed victory for the low and middle classes. And between the high cost of living, high state income taxes, and the highest level of regulation in the nation, the low and middle classes in California could use any help they can get.
Christos A. Makridis is an associate research professor at Arizona State University, a digital fellow at Stanford University, and visiting faculty at the University of Nicosia. Patrick A. McLaughlin is a Research Fellow at the Hoover Institution at Stanford University and a Visiting Research Fellow at the Pacific Legal Foundation. Follow him on X: @econpatrick.
America must harness stablecoins to future-proof the dollar
This article was originally published in Fortune Magazine.
With Congress just passing the federal budget, lawmakers will have an opportunity to tackle long-term financial challenges outside of crisis mode. One such challenge—and opportunity—is the rise of stablecoins: privately issued digital tokens pegged to fiat currencies like the U.S. dollar. Stablecoins have rapidly grown into a market worth hundreds of billions of dollars, facilitating billions in transactions, but they’ve lacked a comprehensive U.S. regulatory framework. Fortunately, Washington is signaling new openness to digital assets—evidenced by President Trump announcing the establishment of a strategic digital asset reserve for the nation. Creating the requisite regulatory clarity would unlock a new era of competition and innovation among banks.
Stablecoins are a strategic extension of U.S. monetary influence. Around 99% of stablecoin volume today is tied to the U.S. dollar, exporting dollar utility onto international, decentralized blockchain networks. A stablecoin market with the right guardrails can strengthen the U.S. dollar’s dominance in global finance. If people around the world can easily hold and transact in tokenized dollars, the dollar remains the go-to currency even in a digitizing economy. Recent congressional hearings echo this point—up to $5 trillion in assets could move into stablecoins and digital money by 2030, up from roughly $200 billion now. If the U.S. fails to act, it risks “becoming the rust belt of the financial industry,” as one fintech CEO warned.
Other jurisdictions aren’t standing still: Europe, the U.K., Japan, Singapore, and the UAE are developing stablecoin frameworks. Some of these could even allow new dollar-pegged tokens issued offshore—potentially eroding U.S. oversight. In short, America must lead on stablecoins or cede ground to Europe’s digital euro and other central bank digital currencies (CBDCs), which in their strictest form threaten both the private banking ecosystem and individual sovereignty. My research, for example, shows that CBDCs to date have not had any positive effects on GDP growth or inflation reduction, but have had negative effects on individuals’ financial well-being.
Ideally, various regulated institutions—banks, trust companies, fintech startups—could issue “tokenized dollars” under a common set of rules. Before the 1900s, state governments had primary authority over banking. While that led to fragmentation and problems, with the right federal architecture, blockchain allows banks to offer differentiated products and a version of what existed pre-1900—their own type of stablecoin that differs in security, yield, or other features—while still keeping the value pegged to the dollar. More broadly, a large body of academic research shows how stablecoins drive down transaction costs, speed up settlement times, and broaden financial inclusion through new services.
In the absence of federal action, we risk a patchwork of state-by-state rules or even de facto regulation by enforcement, which creates uncertainty for entrepreneurs and consumers alike. The Stablecoin Tethering and Bank Licensing Enforcement (STABLE) Act, introduced in the House in 2020, would require any company issuing a stablecoin to obtain a bank charter and abide by bank regulations, including approval from the Federal Reserve and FDIC before launching a stablecoin, and to hold FDIC insurance or Federal Reserve deposits as reserves; in effect, stablecoin issuers would be regulated like banks to protect consumers and the monetary system.
Preventing government overreach
However, as House Financial Services Committee Chairman French Hill has said, the goal should be to modernize payments and promote financial access without government overreach. Notably, Hill contrasted private-sector stablecoin innovation with the alternative “competing vision” of a government-run digital dollar (central bank digital currency) that could crowd out private innovation. And, the STABLE Act could be too draconian, penalizing non-bank entities. To that end, the recent bipartisan effort in the Senate—the Guiding and Establishing National Innovation for U.S. Stablecoins Act of 2025 (GENIUS Act)—has gained momentum.
In practice, the GENIUS Act could allow a regulated fintech or trust company to issue a dollar stablecoin under state supervision, so long as it complies with stringent requirements mirroring federal bank-like rules on liquidity and risk. This kind of flexibility, paired with robust standards, can prevent market fragmentation by bringing all credible stablecoin issuers under a regulatory “big tent.” It would also prevent any single point of failure: If one issuer falters, others operating under the same framework can pick up the slack, keeping the system stable.
Critics often voice concerns that digital currencies could enable illicit activity. But in reality, blockchain technology offers more transparency, not less, when properly leveraged. Every transaction on a public blockchain is recorded on an immutable ledger. Law enforcement has successfully traced and busted criminal networks by following the on-chain trail—something much harder to do with cash stuffed in duffel bags. In fact, blockchain’s decentralized ledger offers the potential for even greater transparency, security, and efficiency.
Following the momentum from the White House, Congress has a running start on crafting rules that bring stability and clarity to this market now that the budget has passed. Lawmakers should refine and pass a comprehensive stablecoin bill that incorporates the best of both approaches—the prudential rigor of the bank-centric model and the innovation-friendly flexibility of a dual license system. Done right, stablecoin legislation will reinforce the dollar’s role as the bedrock of global finance in the digital age, unlock new fintech innovation and competition domestically, and enhance financial integrity.
Changing Compensation Calls for Updated Social Contract
This article was originally published in Real Clear Markets.
American workers are witnessing a profound shift in how they are compensated. A century ago, a job’s pay was almost entirely a paycheck, but now nearly a third comes as benefits like health insurance, retirement plans, and stock options. Moreover, the growth in benefits is concentrated among wealthier workers, leaving the average American behind in an era of rapid technological change, according to a recent working paper I co-authored with Adam Bloomfield and Travis Cyronek.
The policy focus has recently shifted towards the American worker. “The American Dream is rooted in the concept that any citizen can achieve prosperity, upward mobility, and economic security. For too long, the designers of multilateral trade deals have lost sight of this,” said Treasury Secretary Scott Bessent in a recent talk to the Economic Club of New York. The Trump Administration has pointed out that the average American worker has borne the bulk of the burden, consistent with a large body of empirical evidence on globalization and trade. That is not to say there have not been benefits from low prices, but we need to acknowledge the costs.
Our recent working paper points out that the burden on the American worker may be even more severe than previously thought: while we often talk about wages, total compensation – which also includes benefits – tells an even tougher story for the average worker.
At the turn of the 20th century, benefits made up virtually none of a worker’s compensation, but by the late 1990s, over a quarter of a typical worker’s compensation came from non-wage benefits. Much of this transformation occurred in the post-World War II era: employer-funded “fringe” benefits soared from about 7% of worker compensation in 1947 to 18% by 1979, and now hover around a third of total compensation. While valuable to some workers, expensive benefits often go unused and risk making workers feel locked into their jobs, while other workers lack even basic benefits.
On the one hand, benefits like health coverage and retirement contributions provide security and long-term value. On the other hand, many workers would prefer or urgently need higher wages instead of benefits they can’t readily spend. But our new paper shows that both wage and benefits growth for middle and low-income workers has lagged behind productivity.
Moreover, millions of low-wage workers get few or no benefits from their jobs – no health plan, no retirement account, no paid time off. As a result, the gap in overall compensation (wages + benefits) between high- and low-paid workers is even wider than the wage gap alone. A cashier or care aide might earn a bare minimum wage with no health coverage, while a higher-paid manager not only earns more per hour, but also gets thousands of dollars in insurance and pension contributions. In other words, the people who can least afford out-of-pocket medical costs are the least likely to get health coverage through their jobs.
To address the challenges posed by benefit-heavy compensation structures, we need to find ways of decoupling basic benefits from a single employer. The expansion of generative artificial intelligence has likely spurred greater self-employment, so now more than ever we need to think through ways of adapting labor market institutions to promote healthy growth.
We also need to consider how to incentivize employers to extend benefits to part-time and low-wage employees. This could involve tax credits for small businesses that provide health insurance or retirement plans to lower-paid staff, or penalties for large companies that leave most workers uncovered. Or, it could involve expanding employee stock ownership plans (ESOPs) that allow employees to share in the firm’s profits so that they can make their own choices about which benefits to purchase on the open market.
The changing nature of compensation in America – from straight wages toward benefit-heavy packages – calls for an updated social contract. Without intervention, the benefits revolution will continue to bypass millions of workers, accelerating income inequality and social fragmentation. By modernizing policies to reflect how people are paid today, we can protect the dignity of work and strengthen the American workforce across all income levels.
Trump’s crypto reserve is being panned by crypto leaders. Here’s why it’s actually a good idea
This article was originally published in Fortune Magazine.
The recent announcement by the United States to establish a strategic crypto reserve, featuring Bitcoin, Ethereum, XRP, Solana (SOL), and Cardano (ADA), is a major milestone for national security and economic policy. By integrating these digital assets into a formal reserve, the U.S. not only fortifies its national security posture, but also strategically supports and leads the growth of the private digital asset market worldwide.
The announcement received criticism from some crypto leaders, such as Coinbase CEO Brian Armstrong, who had pushed for including only Bitcoin, and 8VC general partner Joe Lonsdale, a Trump supporter who argued the government should stay out of crypto. Some have also suggested that there was insider trading, but these accusations remain speculative so far. Do not forget that there is vast insider trading outside of crypto—so much that there’s even an app called Autopilot that allows retail users to replicate the trades of politicians.
We’ll get to the advantages of having a strategic reserve, but let’s first pause on whether such a reserve should hold only Bitcoin. Armstrong is a laudable leader, and he makes an important point—that we should focus on Bitcoin because of its relative stability and strength. But blockchain is so much more than just Bitcoin. Other tokens have not been around as long, and thus their price volatility is greater, but that doesn’t make them any less strategic.
In fact, the newer generation of tokens often offers more sophisticated consensus mechanisms and greater utility to users, such as ETH supporting decentralized apps and XRP supporting cross-border transactions at scale. We cannot dismiss these because BTC was “first.”
Crypto reserve benefits
Let’s explore the upside of a strategic crypto reserve.
First, the establishment of a crypto reserve provides a hedge against escalating geopolitical risks. Historically, U.S. economic power has relied heavily on the dominance of the dollar, but this dominance has faced challenges—especially lately—from geopolitical rivals seeking alternative financial channels to circumvent U.S.-led financial systems and sanctions. By holding digital assets, the U.S. expands its bargaining power beyond traditional fiat currency, providing an alternative layer of economic leverage. In times of tension or uncertainty, digital assets offer resilience against targeted economic disruptions, sanctions, and currency manipulation.
Moreover, each of the chosen digital assets brings distinct strategic advantages that enhance national security infrastructure. For example, XRP is renowned for executing cross-border transactions with exceptional speed and minimal transaction costs. Such capabilities are integral during crises requiring immediate international monetary settlements or aid distribution. Similarly, Solana’s high-performance blockchain provides robust support for scalable and secure applications such as secure communications infrastructure or real-time monitoring of critical national assets. Cardano, known for its serious approach to governance, transparency, and security, offers additional prospects for stability and reliability.
But second, here’s a fact that might be overlooked: The formation of this crypto reserve also carries profound implications for private digital asset markets. The recent federal endorsement will serve as a powerful catalyst for market confidence and institutional adoption. Although support for digital assets has already been growing, institutional investors and major financial institutions have still hesitated to engage fully with cryptocurrencies due to regulatory uncertainty and concerns over legitimacy. The launch of an official U.S. crypto reserve sends a powerful signal: These digital assets are not only legitimate, but also strategically valuable.
The strategic crypto reserve contrasts sharply with the alternative scenario of implementing a Central Bank Digital Currency (CBDC), where digital asset management is entirely centralized under government control. Unlike a CBDC, which could displace private banks and the market for stablecoins by monopolizing digital asset flows and potentially stifle innovation through excessive centralization, the strategic crypto reserve enables the government to collaborate with private entities, fostering a balanced, vibrant digital asset ecosystem. Other research using cross-country data has also found that CBDCs do little to reduce inflation or boost productivity, and instead reduce financial well-being, particularly among vulnerable populations.
Crypto confidence
This alternative approach will help support the growth of the private market for digital assets. In particular, startups, as well as incumbent financial institutions, can more confidently invest in infrastructure, talent acquisition, and research initiatives knowing they have clear governmental alignment. Clear governmental participation in digital asset markets can streamline regulatory processes, ensuring private entities can innovate securely within well-defined legal boundaries. Countering malicious influences in crypto means bringing more transparency to the market.
The U.S. strategic crypto reserve is a sophisticated approach that addresses both geopolitical vulnerabilities and economic innovation simultaneously. By diversifying its strategic reserves into digital asset holdings, the nation strengthens its national security by broadening economic leverage and creating an alternative financial buffer against external interference. Federal involvement also helps legitimize and invigorate the private digital asset sector, creating conditions for exponential market growth and innovation. While any action necessarily creates new risks, these can and should be managed, but we shouldn’t overlook the potential upsides.
Embracing FinTech: How CFPB Can Unlock the Future of Earned Wage Access
This article was originally published in Real Clear Markets.
The Consumer Financial Protection Bureau (CFPB) has occupied many headlines lately, but the change in leadership largely reflects a different approach to consumer empowerment rather than a departure in priorities. Among the many ways that the Trump Administration can improve on the status quo is the CFPB’s treatment of earned wage access (EWA) products. EWA products allow employees to access a portion of their earned wages before payday, often for a small fee or free. The cost to employees is significantly lower than other options, including payday loans that often carry annual percentage rates (APRs) exceeding 300%. EWA fees typically range from $1 to $5 per transaction or are covered through alternative funding mechanisms like merchant interchange fees.
EWA providers do not charge interest, require collateral, or impose penalties for non-repayment. More importantly, because EWA draws on wages already earned, it does not create new debt obligations for workers. Some providers integrate directly with payroll systems, ensuring that any advance is automatically deducted from the employee’s next paycheck, eliminating default risk. This structure allows EWA fees to remain lower than traditional short-term credit options while offering a more transparent alternative to overdraft fees and high-cost lending.
Companies already serving consumer financial needs are well-positioned to expand into this space. Chime’s MyPay, for instance, enables consumers to access wages on their own schedule without hidden costs by connecting directly to payroll systems and leveraging merchant-funded models. Instead of employers taking the easy way out by pushing costs onto workers (i.e., “paying for their pay”), they can explore partnerships with FinTech providers and challenger banks to drive innovation in benefits delivery. This shift could not only lower costs, but also increase financial stability for employees who currently live paycheck to paycheck.
However, previous CFPB leadership made such FinTech partnerships tougher by classifying EWA programs as a type of consumer loan. That categorization imposed costly regulatory requirements under the Truth in Lending Act (TILA), treating EWA advances as if they were traditional credit products. TILA mandates extensive disclosures, compliance costs, and risk assessments that are unnecessary for a product that simply provides early access to wages. This regulatory burden raises the cost of providing EWA, forcing providers to either pass higher costs onto employees or exit the market altogether, reducing financial flexibility for workers.
With a new CFPB director expected to take a fresh look at these regulations, the opportunity exists to rethink the treatment of EWA in a way that balances consumer protection with financial innovation. There is no doubt that we need some regulations to set guardrails for markets, but the overarching concern is that we have witnessed a proliferation of regulations that do little to advance consumer safety and instead generate unintended consequences, as my work with Alberto Rossi in 2020 has shown. Policymakers should focus on ensuring transparency and cost efficiency while allowing EWA providers to build models that eliminate fees for employees.
One such model leverages merchant interchange fees and employer partnerships to fund EWA services. When employees access their wages through an EWA-linked card, merchants pay a small fee—typically around 1%—which can be reinvested into funding wage advances. This creates a sustainable revenue stream without burdening workers with direct fees. Some fintech firms, like Chime’s MyPay, have already adopted this approach, offering free EWA services by integrating directly with payroll providers and employer benefits programs.
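A stylized version of that arithmetic shows how modest card spending can cover fee-free advances; the figures below are assumptions for illustration, not Chime's actual economics.

```python
# All numbers are hypothetical.
monthly_card_spend = 1_200.0  # spending on the EWA-linked card per user
interchange_rate = 0.01       # ~1% fee paid by merchants

interchange_revenue = monthly_card_spend * interchange_rate
print(f"Revenue per user-month: ${interchange_revenue:.2f}")  # $12.00
# Because the advance principal is repaid automatically from the next
# paycheck, this revenue needs to cover only operating costs and float,
# not credit losses.
```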
For employers, EWA programs also offer cost savings. Running payroll more frequently is expensive and administratively complex, and EWA provides a way to give employees financial flexibility without increasing payroll cycles. In turn, this reduces reliance on predatory payday lenders, which research has linked to higher bankruptcy rates among low-income workers.
While former director Rohit Chopra’s tenure at the CFPB has ended, the broader goal of improving financial access for workers remains. Regulatory compliance for its own sake is empty; the priority should be fostering a financial ecosystem where innovation lowers costs for workers and expands economic opportunity. Reclassifying EWA as something other than a loan is a first step in that process, and it points to many more opportunities to modernize financial regulations in ways that enhance worker financial stability without stifling innovation.
The secret weapon to fixing our broken immigration system is right in front of us
This article was originally published in Fox News (with Corey DeAngelis).
Elon Musk, owner of Twitter/X, and entrepreneur Vivek Ramaswamy sparked a debate in December when they advocated allowing more legal immigration for high-skilled workers – for example, through H-1B visas – to make America more competitive. President-elect Donald J. Trump endorsed the policy in a statement to the New York Post shortly after the dispute broke out.
Conservatives on both sides of this discussion should be able to agree on one thing: we would not need to import as much talent if we had a more effective education system.
The latest data from the National Assessment of Educational Progress, also known as the "nation’s report card," shows that fewer than one in four eighth grade students are proficient in math and fewer than a third are proficient in reading. The latest international assessment shows that we’re ranked 24th in math – in the middle of the pack – despite spending nearly $20,000 per public school student each year, more than just about any other country in the world.
U.S. 4th grade math scores have fallen 18 points since 2019 – a decline larger than that of all but three countries: Azerbaijan, Iran, and Kazakhstan.
We can start fixing the education crisis by improving the efficiency of educational resource allocation. Mountains of empirical evidence in economics research indicate that misallocation is one of the greatest impediments to economic growth for a nation, as well as the educational services sub-sector. To that end, improving the efficiency of public education can go a long way in producing multiplier effects for a nation as a whole.
Trump appointed both Musk and Ramaswamy to head the newly formed Department of Government Efficiency (DOGE) in November. In his statement announcing DOGE’s new leaders, Trump said his administration will "dismantle Government Bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure Federal Agencies."
It’s no secret that waste runs rampant through our public school system. The U.S. spends over $900 billion per year on education for lackluster results. The current system is not serving students and makes teachers’ lives more difficult, so now is the time to start thinking about how to get a bigger bang for our buck in the Department of Education. We need to inventory where current resources are going and what outcomes they’re driving – plain and simple.
But tackling this apparent low-hanging fruit can only do so much to cut waste. After all, about 90% of all public-school funding comes from state and local sources, not the federal government.
That’s why we have to understand the root cause behind the deteriorating student outcomes. A major potential factor is administrative bloat in American education. The latest data from the National Center for Education Statistics show that student enrollment has only increased by about 5% since 2000, but the number of teachers employed by the system has grown twice as fast as students, by about 10%, over the same period. School district administrative staff has increased by about 95%, or 19 times the rate of student enrollment growth.
We’ve increased inflation-adjusted spending per student by more than 160% since 1970 and the teachers aren’t seeing the money. Teacher salaries have only increased by 3% in real terms over the same period.
The problem is that the public school system operates as a monopoly with weaker incentives to spend money wisely. But public-school unions do have a strong incentive to advocate for hiring more people, particularly in states that do not have right-to-work laws. Additional staffing means more dues-paying members and a larger voting bloc.
Our just-released study provides the first evidence that unions are driving administrative bloat in education. Using data from the National Center for Education Statistics and the American Community Survey between 2006 and 2024, we find a robust positive relationship between union density and staff-to-student ratios, and negative effects of right-to-work (RTW) laws on these ratios. These effects are largely driven by the expansion of administrative and support roles rather than teaching positions, and they are concentrated in non-RTW states.
Specifically, we find that a 10-point increase in teachers union density is associated with a one-point increase in year-to-year staffing growth.
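For readers interested in the mechanics, the specification is in the spirit of the sketch below. The input file and column names are hypothetical, and this is an illustration rather than the study's exact code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-by-year panel assembled from NCES and ACS data.
df = pd.read_csv("state_year_panel.csv")

# Staffing growth regressed on union density and RTW status, with
# state and year fixed effects and errors clustered by state.
model = smf.ols(
    "staffing_growth ~ union_density + rtw + C(state) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

print(model.params["union_density"])
# A coefficient near 0.1 would match the result above: a 10-point rise
# in union density maps to a one-point rise in staffing growth.
```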
In Chicago, a union stronghold, staffing has increased by a whopping 20% since 2019 even though student enrollment has plunged 10%. In Texas, one of six states that outlaw collective bargaining for public employees, staffing has increased by 8% – much closer to its 2% growth in student enrollment – over the same period. Our results show that these examples are not merely anecdotal: this has been happening at scale.
Injecting competition into the K-12 education system would put pressure on school districts to redirect otherwise wasteful spending into the classroom. Trump can help make this happen by getting congressional Republicans in line to pass school choice. The Educational Choice for Children Act already passed out of the House Ways and Means Committee last September, and President-elect Trump said he would sign it.
Improving the efficiency of government should be a non-partisan issue, especially in a sector that hits so close to home for every American – education. It’s now up to Congress to deliver for the parents who put them in office. Allowing parents to direct the upbringing of their children is the right thing to do, but it will also make America more competitive and make education great again.
Making crypto mainstream requires greater efforts to stop fraud
This article was originally published in Cointelegraph.
We find it easy to talk about the benefits of the digital economy, whether the internet or digital assets, but the costs are often overlooked. Whether the surge in human trafficking that has emerged on social media platforms or the rise of cybersecurity vulnerabilities, the expansion of the digital economy comes with new risks to manage.
The digital asset community is no different and, to scale and become sustainable, it must confront the prevalence of fraud. This should not be hard: distributed ledger technologies are already demonstrating their value by solving concrete use-cases. This week in Vienna, Austria, the Austrian National Bank — together with the Complexity Science Hub and other sponsors — is hosting a conference on advances in financial technology, with a wide array of presenters who have researched value-enhancing uses of blockchain technology.
Thanks to pioneering work by the Federal Trade Commission’s Consumer Sentinel, we now have basic statistics on the incidence of fraud, the perpetrators, and the countries that exhibit the greatest violations. Using these data on complaints, Michel Grosz and Devesh Raval from the FTC show that it is possible to identify countries with excess levels of fraud based on their level of exports and to whom they are exporting. We need this caliber of data and the processes to support its collection to make strides in countering fraud.
Unfortunately, crypto does not have a great reputation on this front. The FTC released data showing $114 million in reported fraud from Bitcoin ATMs (BTMs) in 2023 — and the number of crypto scams has surged in recent years. Of course, we need to view these statistics in perspective: fiat currencies remain the currency of choice for fraud across the world, so we should not compare the worst of crypto with the best of fiat – it’s not an apples-to-apples comparison. Nevertheless, we should still strive to establish the right incentives and processes within the digital asset ecosystem to counter fraud wherever possible.
Fortunately, there is already a wave of blockchain use-cases countering fraudulent activity. Consider, for instance, the role of financial auditing in ensuring the integrity and transparency of organizations. Currently, auditors lack the ability to cross-check transactions between different organizations, a limitation that can lead to misreporting scandals involving millions of dollars and leaves many crypto audits as little more than theater. To address this, new protocols leveraging blockchain, such as Cross Ledger cOnsistency with Smart Contracts (CLOSC) and Cross Ledger cOnsistency with Linear Combinations (CLOLC), are emerging that will enable auditors to verify cross-ledger transactions more efficiently with built-in privacy and security properties, such as transaction amount privacy and organization-auditor unlinkability.
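The cryptographic details of CLOSC and CLOLC are beyond the scope of this piece, but a heavily simplified stand-in conveys the core idea of checking cross-ledger consistency without revealing amounts. The real protocols rely on smart contracts and proper cryptographic commitments, not the salted hashes used here.

```python
import hashlib
import secrets

def commit(tx_id: str, amount: int, salt: bytes) -> str:
    """Publish a salted hash commitment to a transaction."""
    return hashlib.sha256(f"{tx_id}:{amount}".encode() + salt).hexdigest()

# The two counterparties to a shared transaction agree on a salt.
salt = secrets.token_bytes(16)
org_a_entry = commit("tx-42", 5_000, salt)  # recorded on org A's ledger
org_b_entry = commit("tx-42", 5_000, salt)  # recorded on org B's ledger

# The auditor compares commitments without learning the amount or salt.
print("Ledgers consistent:", org_a_entry == org_b_entry)  # True
```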
Similarly, take scalability as another example, which is recognized as necessary for institutional adoption. Layer-2 (L2) solutions such as rollups help solve the scalability problem of L1s by handling transactions off the main blockchain and then posting the results back. However, a big concern is ensuring the security of these rollups, especially making sure that the data posted is accurate.
One recent study proposed a "watchtower" system where independent actors (watchtowers) are rewarded for keeping an eye on transactions and raising alarms when something seems wrong. These watchtowers are required to prove that they’ve been diligent in their work through a system called "proof of diligence," which ensures they’ve monitored the transactions properly. They can also challenge false data, and if they catch errors, they earn rewards. A key part of the solution is not just the technology, but also the economics of designing adequate incentives to prevent wrongdoing and promote trust.
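The incentive logic can be captured in a few lines of arithmetic. The numbers below are invented for illustration and are not parameters from the study.

```python
# Hypothetical per-period payoffs for a single watchtower.
monitoring_cost = 1.0     # cost of actually checking the posted data
diligence_reward = 1.5    # payment for a valid proof of diligence
challenge_bounty = 50.0   # reward for catching fraudulent data
fraud_probability = 0.01  # chance the rollup posts bad data

# A diligent watchtower earns the reward plus the expected bounty.
expected_diligent = (diligence_reward
                     + fraud_probability * challenge_bounty
                     - monitoring_cost)
expected_lazy = 0.0  # without monitoring, no valid proof can be produced

print(f"Expected payoff from diligence: {expected_diligent:.2f}")  # 1.00
# Diligence pays as long as this stays above the lazy payoff of zero.
```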
Value-enhancing examples abound in the blockchain ecosystem, as the AFT conference in Vienna will showcase, but we need to do a better job of quantifying the benefits of real use-cases and amplifying the integral role that they play in enabling economic and social activity. Indeed, one of the greatest use-cases of blockchain technologies, drawing on its roots from cryptography, is the ability to improve security and counter malicious actors. But we need to get more serious in the way we talk about and pitch blockchain as a solution.
Christos Makridis is a guest columnist for Cointelegraph, an associate research professor at Arizona State University, an adjunct associate professor at University of Nicosia and the founder/CEO of Dainamic Banking. He holds doctoral degrees in economics and management science and engineering from Stanford University.
If your country has adopted a CBDC, you might be suffering
This article was originally published in Cointelegraph.
We’re often told that central bank digital currencies (CBDCs) will promote "financial inclusion" and help people around the globe. However, preliminary research results indicate the opposite could be true: Where CBDCs have been adopted, well-being has declined in recent years — particularly among young people and those with low incomes.
My new research paper provides the first comprehensive evaluation of their early effects on macroeconomic indicators and subjective well-being, utilizing cross-country data between 2019 and 2023. The results suggest that the benefits may be more limited than initially anticipated, coupled with potential negative effects on individual well-being and financial stability.
Limited economic benefits and unintended consequences
Data from the World Bank indicates — contrary to what you may think — higher-income countries are more likely to pilot or launch CBDCs, with these countries having, on average, five percentage points higher per capita GDP. While these countries also tend to have larger populations — largely driven by China and India — there are no significant differences in net migration rates, male unemployment rates, or urban populations.
Despite the enthusiasm surrounding CBDCs, the analysis suggested that their impact on key economic indicators — such as GDP growth and inflation — has been minimal. The study's statistical models compared countries that either piloted or launched CBDCs between 2019 and 2023 with those that did not.
Recognizing that countries that pilot or launch CBDCs may be systematically different from their counterparts, I also created a "synthetic control" group that matched countries with CBDCs to others based on a nonlinear combination of controls. In other words, while there was no single control country, a combination of characteristics across countries allowed for the construction of a "synthetic control." Where possible, the data were used to trace how outcomes within countries changed after CBDC adoption.
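For the technically inclined, a stylized convex-weights version of this construction is sketched below. The data are randomly generated for illustration, and the study itself matches on a nonlinear combination of controls.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
treated = rng.normal(size=5)        # pre-adoption traits of a CBDC country
controls = rng.normal(size=(8, 5))  # traits of eight candidate controls

def loss(w):
    """Distance between the treated country and the weighted controls."""
    return np.sum((treated - w @ controls) ** 2)

n = controls.shape[0]
res = minimize(
    loss,
    x0=np.full(n, 1 / n),
    bounds=[(0, 1)] * n,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
)
print(res.x.round(3))  # weights defining the "synthetic control" country
```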
The study found no evidence that CBDCs correlated with greater GDP per capita or lower inflation. These findings challenge the prevailing narrative that CBDCs are a panacea for economic challenges, particularly in low- and middle-income countries.
However, macroeconomic indicators only go so far, especially in developing countries where the data might be less reliable. Gallup and its World Poll — the leading source of data for constructing measures of subjective well-being across countries over time — provided the data for two additional outcomes of interest: whether an individual was thriving and their financial well-being. The former is measured from self-assessed rankings of current life satisfaction and expected life satisfaction five years ahead, both on a 0-10 scale. Financial well-being is measured from several self-assessed questions about the ease of paying bills and financial anxiety.
Gallup's data indicated that CBDCs negatively correlated with both the probability an individual was thriving and their financial well-being — a result that was concentrated among younger, lower-income populations. These groups, who are often the target audience for financial inclusion initiatives, report feeling less financially secure.
After estimating these statistical models relating well-being with CBDC adoption, country controls, and individual demographics, the data identified where the declines in well-being have been the greatest. The CBDC-interested countries with the largest declines between 2020-23 — in terms of respondents who were "thriving," according to the Gallup World Poll — were South Africa, Sweden, Thailand, and South Korea. (Sweden and South Korea have announced pilot CBDC programs, while South Africa and Thailand started developing their CBDCs in the first quarter of 2024.)
The importance of design and regulation
One of the critical challenges facing central banks is designing CBDCs that maximize benefits while minimizing risks. The risks associated with CBDCs are not trivial. They include potential financial instability through the disintermediation of banks, the erosion of privacy, and the concentration of financial power, which I’ve written about in Cointelegraph before. These risks are particularly pronounced if the central bank directly manages all aspects of the CBDC, which could undermine the traditional role of commercial banks and reduce the availability of credit, as Jesús Fernández-Villaverde and his coauthors showed in a 2021 paper.
Hybrid CBDC models could reduce some of these risks by allowing private-sector intermediaries to interact with customers while a central bank oversees the system, preserving a role for commercial banks and ensuring that CBDCs complement rather than disrupt existing financial systems. Additionally, implementing strong privacy protections and limiting the centralization of power are essential to prevent the potential misuse of CBDCs. That is in stark contrast to the way that some countries have implemented CBDCs, particularly China. However, further work is needed to assess how the architecture of the CBDC affects both economic and social outcomes — not just in theory, but very concretely.
Remote work does make more time for work-life balance. Here’s the data
This article was originally published in Fast Company.
There’s no debate that remote work is here to stay, but the question is what shape it takes and how different types of workers are changing the amount of time they work. My new research shows that remote workers substantially reduced their time at work and increased their leisure time after 2019, and these trends continued into 2023.
How to measure time spent working
To explore the effects of this shift, I used data from the American Time Use Survey (ATUS), restricted to employed, full-time respondents aged 25 to 65, which provides a comprehensive measurement of time allocated to various activities with minimal measurement error.
This data set offers more reliable insights than other labor supply surveys, such as the Current Population Survey or the American Community Survey, due to reduced recall bias. Furthermore, the analysis focused on non-self-employed workers, as their occupational classification is clearer.
One of the major benefits of the ATUS is that it measures a wide array of activities, not just time at work as in many existing surveys, allowing me to track time across work, leisure, household chores, childcare, and more. Leisure is defined as time spent socializing, passive and active leisure, volunteering, pet care, and gardening.
Since the ATUS collects detailed 24-hour time diaries in which respondents report all the activities from the previous day in time intervals, the records are also more reliable than standard measures of time allocated to work available in other federal data sets that require respondents to recall how much they worked over the previous year or week. These diaries contain much less noise than typical survey results.
To measure remote work, I used the “remotability” index from professors Jonathan Dingel and Brent Neiman’s 2020 paper in the Journal of Public Economics, which is based on the Department of Labor’s O*NET task-level data on how many tasks in an occupation can be done remotely.
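In spirit, constructing such an index amounts to averaging task-level remotability within occupations, as in the hypothetical sketch below; the input file and column names are invented, not O*NET's actual schema.

```python
import pandas as pd

# Hypothetical extract of O*NET-style task data.
tasks = pd.read_csv("onet_tasks.csv")

# Suppose `remotable` is 1 if a task can be done from home, else 0;
# the occupation-level index is the share of remotable tasks.
remotability = tasks.groupby("occupation_code")["remotable"].mean()
print(remotability.head())
# ATUS respondents are then tagged as holding a more or less
# remote-intensive job by merging on occupation codes.
```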
Updated 2023 time use patterns
My recently updated research compares how people have changed their time allocation between 2019 and 2023 when they work in remote jobs versus hybrid or on-site.
While remote workers spent 28 minutes more per day at work than their non-remote counterparts in 2019, they drastically reduced their time at work over the pandemic: 32 minutes per day less in 2020, 41 minutes less in 2021, 57 minutes less in 2022, and 35 minutes less in 2023. Conversely, workers in more remote jobs allocated more time toward leisure over these years.
Do these changes in time use simply reflect differences in the type of worker within remote jobs over time? All my results control for demographic factors, such as age and education, hourly wages, and differences across industries and occupations. I also studied changes in the composition of more versus less remote jobs over these years, finding minimal differences.
Interestingly, the time spent on home production (household chores, caring for household members) and shopping did not increase significantly. What did rise, modestly, was time spent on other activities not otherwise classified. Overall, remote workers are not merely reallocating their work time to household activities but are genuinely engaging in more leisure.
Labor and leisure changes varied across different demographic groups, and these trends diverged in some cases in 2023. In particular, men reduced their time at work by 33 minutes per day in 2021 and 58 minutes per day in 2022, relative to 2019, but only 13 minutes per day in 2023.
In contrast, women reduced their time at work over these years, and that reduction intensified in 2023, reaching 76 minutes per day relative to 2019. Singles and those without children also showed a steeper decline in labor and an increase in leisure activities, though the effects were less significant in 2023.
Crucially, these declines in time at work are not driven by the role of commuting. While commute times have declined overall, the bulk of the decline in time at work has been driven by actual time working—not commuting.
These results provide insights into the ongoing debate about quiet quitting, where employees may be dissatisfied with their jobs and reduce their efforts. While the data does not directly measure job satisfaction, ATUS includes some information on subjective well-being. Remote workers reported slightly higher life satisfaction and felt more rested than their in-office counterparts in 2021, suggesting they are not necessarily quiet quitting. Instead, they might be reallocating their time to activities they prefer, enabled by the flexibility of remote work.
Understanding why
Why such a large change in how much men and women worked in 2023? Women who typically work in remote-intensive jobs worked 46 minutes per day more than their less remote counterparts, consistent with my writing last year that did not yet include 2023 data. However, the decline in time at work accelerated more in 2023 for women in remote jobs.
One possibility is that more women than men have reported burnout, even though they had already decreased their time at work in 2022. Another possibility is that there has been some switching back to in-person work, which may be harder for some women who have more caregiving responsibilities, particularly in light of prior research of mine with Chris Herbst and Ali Umair documenting the significant effect that state regulations had on the availability of childcare during the pandemic. In this sense, men in more remote-intensive jobs may have returned to the office more in 2023 in their hybrid jobs, whereas women did not.
I find that time spent on childcare among women with children grew by 28 minutes per day in 2022, but only 15 minutes per day in 2023, relative to 2019. The same patterns do not exist for men; while this gap is part of the explanation, it is not the full story.
Could preferences over in-person versus remote work explain the differences in time use? In companion work, I found that men dislike full working-from-home arrangements, whereas I do not see the same for women.
The data also suggests a preference toward remote work among women, so if these jobs have fewer opportunities and/or requirements around work, then the gradual return to office would result in fewer hours.
Coupled with all of these patterns are findings from Gallup surveys, which show that only 33% of employees are engaged, so burnout—or low engagement—may be a very real phenomenon.
What are the consequences for productivity? Time will tell, but the positive effects of greater flexibility may have offset the negative effects of fewer hours worked. However, much more work remains to be done, including understanding differences in time use across genders.
Moving Slow and Fixing Things
This article was originally published on Lawfare with Iain Nash, Scott J. Shackelford, Hannibal Travis.
Silicon Valley, and the U.S. tech sector more broadly, has changed the world in part by embracing a “move fast and break things” mentality that Mark Zuckerberg popularized but that pervaded the industry long before he founded FaceMash in his Harvard dorm room. Consider that Microsoft introduced “Patch Tuesday” in 2003, which began a monthly process of updating buggy code that has continued for more than 20 years.
While it is true that the tech sector has attempted to break with such a reactive and flippant response to security concerns, cyberattacks continue at an alarming rate. In fact, given the rapid evolution of artificial intelligence (AI), ransomware is getting easier to launch and is impacting more victims than ever before. According to reporting from The Hill, criminals stole more than $1 billion from U.S. organizations in 2023, which is the highest amount on record and represents a 70 percent increase in the number of victims over 2022.
As a result, there are growing calls from regulators around the world to change the risk equation. An example is the 2023 U.S. National Cybersecurity Strategy, which argues that “[w]e must hold the stewards of our data accountable for the protection of personal data; drive the development of more secure connected devices; and reshape laws that govern liability for data losses and harm caused by cybersecurity errors, software vulnerabilities, and other risks created by software and digital technologies.” This sentiment represents nothing less than a repudiation of the “Patch Tuesday” mentality and with it the willingness to put the onus on end users for the cybersecurity failings of software vendors. The Biden administration, instead, is promoting a view that shifts “liability onto those entities that fail to take reasonable precautions to secure their software.”
What exact form such liability should take is up for debate. Products liability law and the defect model is one clear option, and courts across the United States have already been applying it using both strict liability and risk utility framings in a variety of cases including litigation related to accidents involving Teslas. In considering this idea, here we argue that it is important to learn from the European Union context, which has long been a global leader in tech governance even at the risk of harming innovation. Most recently, the EU has agreed to reform its Product Liability Directive to include software. When combined with other developments, we are seeing a new liability regime crystallize that incorporates accountability, transparency, and secure-by-design concepts. This new regime has major implications both for U.S. firms operating in Europe and for U.S. policymakers charting a road ahead.
The EU’s various levers to shape software liability, and more broadly the privacy and cybersecurity landscape, are instructive in a number of ways in helping to chart possible paths ahead, and each deserves regime-effectiveness research to gauge its utility. These include:
Extending Products Liability to Include Cybersecurity Failings: Following the EU’s lead in expanding the definition of “product” to include software and its updates, U.S. policymakers could explore extending traditional products liability to cover losses due to cybersecurity breaches. This would align incentives for businesses to maintain robust cybersecurity practices and offer clearer legal recourse for consumers affected by such failings.
Adopting a “Secure by Design” Approach: New EU legislation, such as the Cyber Resilience Act, mandates that products be secure from the outset. U.S. policy could benefit from similar regulations that require cybersecurity to be an integral part of the design process for all digital products. This would shift some responsibility away from end users to manufacturers, promoting a proactive rather than reactive approach to cybersecurity.
Enhancing Transparency and Accountability Through Regulatory Frameworks: Inspired by the EU’s comprehensive regulatory measures like the General Data Protection Regulation (GDPR) and the AI Act discussed below, the U.S. could benefit from creating or strengthening frameworks that enforce transparency and accountability in data handling and cybersecurity. Building on the recent guidance from the U.S. Securities and Exchange Commission that requires publicly traded companies to report material cybersecurity incidents within four days, this could include potential requirements for risk assessments, incident disclosures, and a systematic approach to managing cyber risks across all sectors, not just critical infrastructure.
Each of these themes is explored in turn.
Extending Products Liability to Include Cybersecurity Failings
The EU has taken a more detailed, and broader, approach to imposing liability on software developers than what has commonly been argued for in the U.S. context.
In a recognition that many products, from toasters to cars, have gotten increasingly “smart,” the EU began a process in 2022 to update its products liability regime, which had been in place and largely unchanged since 1985. As such, the agreed reforms to the Product Liability Directive include an expansion of what’s considered a “product” to cover not just hardware, but also stand-alone software such as firmware, applications, and computer programs along with AI systems. Exceptions are applicable for certain free and open-source software, which has long been an area of concern for proponents of more robust software liability regimes.
Relatedly, the concept of “defect” has been expanded to include cybersecurity vulnerabilities, including a failure to patch. What constitutes “reasonable” cybersecurity in this context, such as whether a product provides the expected level of service, builds on other EU acts and directives discussed below.
Recoverable damages have also been broadened to include the destruction or corruption of data, along with mental health harms following a breach. Covered businesses can include internet platforms, the intent being that there is always an “EU-based business that can be held liable.” Even resellers who substantially modify products and put them back into the stream of commerce may be held liable. It is now also easier for Europeans to prove their claims, thanks to the introduction of a more robust, U.S.-style discovery process and class actions, along with an eased burden of proof on claimants and an extension of the covered period from 10 to 25 years in some cases.
Although the EU has long been a global leader on data governance and products liability, the same has not necessarily been true for cybersecurity, particularly critical infrastructure protection. In 2016, the EU worked to change that by introducing the Network and Information Security (NIS) Directive, which was updated in 2023 as NIS2.
Among other things, NIS2 expanded the scope of coverage to new “essential” and “important” sectors, including cloud and digital marketplace providers, and required EU member states to designate Computer Security Incident Response Teams (CSIRTs) and join Cooperation Groups, which are, in essence, international information sharing and analysis centers (ISACs). Covered businesses must take “appropriate” steps to safeguard their networks, secure their supply chains, and notify national authorities in the event of a breach.
In sum, NIS2 regulates software in a manner more familiar in the U.S. context, relying on information sharing and a risk management approach to standardize common activities like incident reporting.
Further, the European Union’s Cybersecurity Act, which took effect in June 2019, establishes a comprehensive framework for certifying the cybersecurity of information and communications technology products, services, and processes. The regulation aims to bolster trust in the digital market by ensuring that covered products and services adhere to standardized cybersecurity criteria. The certification scheme is voluntary, but it affects manufacturers and service providers by enabling them to demonstrate compliance with high levels of cybersecurity, thereby enhancing market perception and consumer trust in their offerings. The act fits within the broader EU strategy of leveraging regulatory measures over direct state control, epitomized by the role of the European Union Agency for Cybersecurity (ENISA). ENISA has become a major entity in shaping and supporting the cybersecurity landscape across the EU, despite facing challenges in establishing its authority and influence.
From a products liability perspective, the Cybersecurity Act shifts the landscape by integrating cybersecurity into the core criteria for product safety and performance evaluations. By adhering to established certification standards, companies not only mitigate the risks of cyber threats but also reduce potential legal liabilities associated with cybersecurity failures. The act encourages transparency and accountability in cybersecurity practices, pushing companies to proactively manage and disclose cyber risks, which can influence their liability in cases of cyber breaches.
This approach aligns with the EU’s broader regulatory security state model, which emphasizes governance through regulation and expertise rather than direct governmental intervention. The model is characterized by the deployment of indirect regulatory tools and reliance on the expertise and performance of various stakeholders to manage security issues, rather than depending solely on direct state power and authority. The voluntary nature of the standards has posed challenges, however, leading to uneven adoption and to vulnerabilities in products that do not comply with the standards or meet minimum security objectives. Nevertheless, some studies credit the act with at least helping the European Union act in a coordinated way.
Adopting a “Secure by Design” Approach
In addition to bringing software within the scope of products liability legislation, the EU has introduced unified cybersecurity requirements for products sold within the common market, including pure software products. The Cyber Resilience Act (CRA), a forthcoming EU regulation, combines detailed cybersecurity requirements, such as patch management and secure-by-design principles, with a comprehensive liability regime. The CRA is more comprehensive than California’s “Internet of Things” (IoT) security law: its cybersecurity requirements go far beyond California’s reasonable-security-feature and password requirements, and it applies to both IoT and software products.
Fundamentally, the CRA requires that products be introduced to the market with all known vulnerabilities patched and that they be developed on a “secure by design” basis. Developers are also required to conduct and maintain a cybersecurity risk assessment, provide a software bill of materials listing the third-party software components used in their products, and ensure that security updates are available for at least five years. Developers and manufacturers of ordinary products can self-certify conformity with the legislation, while “important” and “critical” products require a more in-depth assessment and an independent conformity assessment, respectively.
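To make the software-bill-of-materials requirement concrete, the sketch below shows what a minimal component inventory might look like. It borrows the structure of the community-developed CycloneDX format; the CRA does not prescribe a particular schema, and the component names and versions here are hypothetical, so treat this as an illustration rather than a compliance template.

    import json

    # A minimal, hypothetical software bill of materials (SBOM) in the
    # CycloneDX JSON style. Listing each third-party component and its
    # version is what makes unpatched, vulnerable dependencies visible.
    sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {"type": "library", "name": "openssl", "version": "3.0.13"},
            {"type": "library", "name": "zlib", "version": "1.3.1"},
        ],
    }

    print(json.dumps(sbom, indent=2))

In practice, a developer would generate such a file automatically from its build system and keep it current as dependencies are patched, which is precisely the maintenance discipline the CRA is designed to encourage.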
Noncompliance with the CRA follows the model used in the GDPR and can result in a fine of up to 15 million euros or 2.5 percent of total revenue, whichever is larger, for breaches of core requirements, while other breaches can result in a fine of up to 10 million euros or 2 percent of total revenue. However, there is no mechanism under the act for complainants to enforce the CRA directly; they must instead petition their local regulator if they believe the requirements have not been met.
Enhancing Transparency and Accountability Through Regulatory Frameworks
The EU’s AI Act introduces a regulatory framework, in the name of safety and transparency, to protect users from harms caused by the failure of an AI system. The act classifies AI systems into three categories (prohibited, high-risk, and non-high-risk) and is reminiscent of the CRA in its comprehensive scope. Prohibited applications, such as those involving subliminal techniques or social scoring, are banned within the EU. High-risk applications, which include medical devices and credit scoring systems, must adhere to stringent requirements, including maintaining a risk management system, ensuring human oversight, and registering in the EU’s database of high-risk AI systems. Non-high-risk applications face minimal to no regulatory obligations.
The act also addresses general purpose AI models, like foundation and large language models, imposing obligations similar to those for high-risk systems. These include maintaining a copyright policy and publishing a summary of the training data. Enforcement is managed by domestic regulators and coordinated at the EU level by the newly established European Artificial Intelligence Board and the European Office for AI, where complaints can also be lodged against noncompliant AI providers.
There are penalties for noncompliance. Violations involving prohibited AI can result in fines of up to 30.3 million euros or 7 percent of total revenue. High-risk AI breaches may lead to fines of up to 15.14 million euros or 3 percent of total revenue, and providing misleading information to regulators can attract fines of up to 7.5 million euros or 1.5 percent of total revenue. Whether the higher or the lower of the two amounts applies depends on whether the entity is a large corporation or a small or medium-sized enterprise. One of the major limitations of the EU’s AI liability regime, however, is its broad categorization of risk. In reality, there are many dimensions of risk, to say nothing of the contested definition of fairness in AI systems. In particular, “explainability” and “interpretability” of AI systems are often used interchangeably, and that loose language will make it difficult to enforce and promote trustworthy AI practices.
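Because both the CRA and the AI Act pair a fixed cap with a share of revenue, the applicable maximum turns on a firm’s size and turnover. The sketch below works through that arithmetic using the figures cited above; it is illustrative only (actual fines are set case by case by regulators), and the revenue figure is hypothetical.

    def cra_max_fine(revenue_eur, core_breach):
        # CRA cap: a fixed amount or a share of revenue, whichever is larger.
        fixed, share = (15e6, 0.025) if core_breach else (10e6, 0.02)
        return max(fixed, share * revenue_eur)

    def ai_act_max_fine(revenue_eur, tier, is_sme):
        # AI Act tiers as described above; SMEs face the lower of the two amounts.
        tiers = {
            "prohibited": (30.3e6, 0.07),
            "high_risk": (15.14e6, 0.03),
            "misleading_info": (7.5e6, 0.015),
        }
        fixed, share = tiers[tier]
        return min(fixed, share * revenue_eur) if is_sme else max(fixed, share * revenue_eur)

    # A hypothetical firm with 1 billion euros in annual revenue:
    revenue = 1e9
    print(cra_max_fine(revenue, core_breach=True))               # 25,000,000 euros
    print(ai_act_max_fine(revenue, "prohibited", is_sme=False))  # 70,000,000 euros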
In the event that a user is harmed following their use of a high-risk AI system, they will be able to benefit from a proposed companion directive, which introduces additional civil liability requirements for AI systems. Under the proposed directive, the user will be able to seek a court order compelling the provider of the AI system to disclose relevant evidence relating to the suspected harm.
However, for the claim to succeed, the claimant will be required to demonstrate to the relevant court that the provider failed to comply with its obligations under the AI Act. Harm that occurs to the claimant despite the provider meeting those obligations is not recoverable under this legislation.
This approach, as is the case with data privacy in the EU context, is far more comprehensive than the Biden administration’s AI executive order and sets out accountability and transparency rules that are already shaping global AI governance.
The General Data Protection Regulation, a data protection law as comprehensive in its domain as the AI Act, came into effect in the European Union on May 25, 2018, aiming to give individuals sovereignty over their personal data and to simplify the regulatory environment for business. In particular, the GDPR requires that companies processing personal data be accountable for handling it securely and responsibly. This includes ensuring that data processing is lawful, fair, transparent, and limited to the purposes for which the data was collected. Product and service providers must disclose their data processing practices and, in many cases, seek explicit consent from users, making them directly liable for noncompliance. The GDPR also gives individuals the right to demand that a company delete their personal data or transfer it to another provider.
Although there are penalties for noncompliance for both primary data controllers and potential third parties, enforcing the GDPR and proving liability has been very difficult. For example, the European Union’s own internal analysis has explained how international data cooperation has been challenging due to factors like “lack of practice, shortcomings in the legal framework, and problems in producing evidence.” Furthermore, because consumers are often searching for specific information and lack other options, they simply consent to a site’s disclaimers to gain entry and never think twice about the data they shared or the possibility of suing the company for damages from, say, a data breach.
Furthermore, empirical studies generally point toward a negative effect of the GDPR on economic activity and innovation. Some studies have found that the GDPR led to a decline in new venture funding and new ventures, particularly in more data-intensive and business-to-consumer sectors. Others found that companies exposed to the GDPR incurred an 8 percent reduction in profits and a 2 percent decrease in sales, concentrated particularly among small and medium-sized enterprises. There is additional evidence that the GDPR led to a 15 percent decline in web traffic and a decrease in engagement rates on websites.
Finally, the Digital Services Act (DSA) “regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms.” It took effect in a staggered process beginning in 2022 and promised risk reduction, democratic oversight, and improvement of online rights. Articles 6(1), 9(1), and 22 of the DSA could be significant after cyberattacks, while Articles 17 through 21 could be crucial protections for users of online platforms whose accounts are suspended or terminated due to intrusions or misuse attributable to cyber threats. Article 9(1) obliges certain platforms to remove illegal material upon being served with notice of specific items by “judicial or administrative authorities.” As for online dangers beyond intellectual property infringement and incitement to violence, Recital 12 of the DSA references “stalking” and “the unlawful non-consensual sharing of private images.”
In the United States, the law on loss of access to online accounts remains a patchwork, even in cases involving data breaches covered by federal statutes. While some courts allow breach of express or implied contract as a theory of recovery, others may not, and arbitration clauses are a formidable challenge in some cases. Articles 20(4) and 21 of the DSA strengthen the right to use online platforms and not to suffer arbitrary deprivation of access.
Settlements of class actions, like those involving iPhone battery life and Google Chrome incognito mode, suggest that claims of defective software and misleading marketing of technology already have traction in U.S. courts without further reforms. Products liability and data security litigation remains viable thanks to the similarity of many U.S. states’ laws and a federal class-action procedure intended to make asserting small-dollar claims economical.
Lessons for Policymakers
A natural question is whether Europe has taken a more active regulatory approach because its technology sector is much smaller. While Europe’s smaller technology sector inevitably means different political economy dynamics, including lower returns to lobbying, there is nonetheless a growing recognition that the absence of clearer guidelines and regulations is a lose-lose situation in the long run. For instance, a voluminous empirical literature documents a rise in concentration and market power, particularly among digital intermediaries, that could be attributed to lax and ambiguous guidelines. Only recently did the U.S. Securities and Exchange Commission introduce guidance requiring public companies to report data breaches within four business days of determining that an incident is material.
The EU’s efforts to extend products liability law to software, adopt a secure by design approach similar to that called for in the 2023 U.S. National Cybersecurity Strategy, and enhance transparency and accountability across the digital ecosystem have solidified its place as a global leader in tech governance.
Several of these steps could be taken at once, perhaps as part of the proposed American Privacy Rights Act, which would give the Federal Trade Commission enhanced powers to investigate deceptive or defective products and would establish baseline privacy and cybersecurity expectations for American consumers.
At the highest level, if a products liability approach is to succeed in the U.S. context, Congress would need to introduce a package of reforms addressing various barriers to recovery, including the economic loss doctrine and the enforcement of liability waivers. Moreover, the array of EU initiatives surveyed above still gives rise to uncertainty, such as a potential cap of 70 million euros on all claims arising from a specific defective item. And costs should not be underestimated: one report from the U.S. House of Representatives Committee on Oversight and Accountability claimed that the DSA and the GDPR introduced 20.4 billion to 46.4 billion euros in new compliance and operating costs. Still, such estimates should be weighed against the staggering economic harm from the software vulnerabilities discussed above.
A best-case scenario would be for policymakers on both sides of the Atlantic, and beyond, to come together and find common ground to encourage the convergence of baseline software security expectations. This process could either be kicked off through a special event, such as a Global Responsible Software Summit modeled after recent ransomware and democracy summits, or be added to an upcoming major gathering.
No nation is an island in cyberspace, no matter how much some wish they were. How leading cyber powers—including the EU and the U.S.—approach the issue of software liability will make worldwide ripples, which, depending on how these interventions are crafted, could turn into a tsunami.
Bolstered by Faith
This article was originally published in City Journal.
The Covid-19 pandemic challenged much more than health-care systems—it also tested communities’ resilience, particularly their ability to handle economic shocks. While we often look to fiscal policy or other government actions to explain localities’ economic outcomes, an overlooked factor plays a significant role: religiosity.
Baylor University professor Byron Johnson and I researched whether communities with higher levels of religiosity before the pandemic fared better economically during the crisis. Comparing data from the Quarterly Census of Employment and Wages between 2019 and 2023 with religiosity levels from the Religion Census in 2010 and 2020, we found that communities where religious adherence had grown over the decade showed notably better employment and business-establishment trends during the pandemic years.
Why would religiosity matter so much in times of economic hardship? Religious communities often act as social safety nets, creating strong networks of moral and practical support. The solidarity this creates can help cushion the blow during economic downturns. Faithful communities often rally in hard times, providing assistance to those in need, from food banks to job networking. Their collective resilience supports not just individual believers but also the broader community infrastructure.
This truth doesn’t just hold for traditionally religious communities. Any community with strong social bonds and a shared sense of purpose, whether based around religious faith or other collective beliefs and practices, can show similar resolve. Weathering a crisis is about more than beliefs—it’s about the networks and mutual support those beliefs foster.
Our study suggests that shared faith and participation contribute significantly to local economies, helping them bounce back faster and stronger. These findings are particularly relevant for policymakers and community leaders. Investing in community-building efforts and supporting faith-based and other local organizations can be an effective strategy to develop economic resilience. This approach can also serve as preparation for managing future crises.
The interplay between religious observance and economic resilience highlights the need for a broader understanding of what strengthens communities and makes them capable of collective response. In a world where economic shocks are inevitable, recognizing and bolstering these networks could be essential to enduring future challenges.
The Silent Office
This article was originally published in City Journal.
The increasing politicization of American life has profound implications not only for social harmony but also for markets and the workplace. Polarization is changing office dynamics, as workers find it increasingly difficult to navigate religious and political issues on the job.
My recent article in the Journal of Economics, Management and Religion presents the results of a new, nationally representative Ipsos survey. The data illustrate a concerning trend: a large share of the American workforce is reticent to express personal views on social and political issues on the job, fearing repercussions that could stymie their career advancement. The survey found that roughly 42 percent of employees have withheld their opinions to protect their professional future, reflecting Americans’ deep-seated fear that self-expression could put their jobs in jeopardy.
That anxiety may be tied to a surging number of instances of religious and political discrimination. Indeed, the Equal Employment Opportunity Commission reported a 73 percent rise in religion-based discrimination charges between 1992 and 2020. The reality behind that statistic, a potentially substantial increase in unjust discrimination against religious employees, may have contributed to workers’ growing unease.
The effects of discrimination and self-censorship in the workplace bleed into the labor market. Would-be employees are unlikely to want to work for an intolerant employer. Some 40 percent of survey respondents, for example, indicated that they are less likely to apply to a company they perceive as being hostile to their beliefs. Such perceptions can affect productivity, too, making current employees less loyal to their employers and potentially lowering worker engagement. The broader trend of workers being willing to relocate for jobs that more closely align with their moral and political compass—often for significantly lower pay—underscores a desire for personal integrity, and the extent to which many feel that their current workplace stifles this aim.
Widening polarization also affects consumer behavior. Many Americans are willing to change their consumption habits based on brands’ political and social stances. Fifty-six percent of survey respondents said that they are likely to cease purchasing from brands that oppose their values, with 30 percent claiming already to have done so. This indicates buyers’ desire to support companies with similar values and demonstrates the deep connection between political and social identity and consumer loyalty.
While companies may feel pressured to take stands on issues, the data suggest that companies should be wary, or at least aware, that many consumers will switch to competitors in response to political posturing. These results challenge companies to focus on delivering value. Businesses need to find a balance that respects diverse viewpoints without compromising their principles.