Remote work does make more time for work-life balance. Here’s the data
This article was originally published in Fast Company.
There’s no debate that remote work is here to stay, but the question is what shape it takes and how different types of workers are changing the amount of time they work. My new research shows that remote workers substantially reduced their time at work and increased their leisure time between 2019 and 2022, and these trends continued into 2023.
How to measure time spent working
To explore the effects of this shift, I used data from the American Time Use Survey (ATUS), restricted to employed, full-time respondents aged 25 to 65, providing a comprehensive measurement of time allocated to various activities with minimal measurement error.
This data set offers more reliable insights than other labor supply surveys, such as the Current Population Survey or the American Community Survey, due to reduced recall bias. Furthermore, the analysis focused on non-self-employed workers, as their occupational classification is clearer.
One of the major benefits of the ATUS is that it measures a wide array of activities, not just time at work, like many existing surveys, allowing me to track time across work, leisure, household chores, childcare, and more. Leisure is defined as time spent socializing, passive and active leisure, volunteering, pet care, and gardening.
Since the ATUS collects detailed 24-hour time diaries in which respondents report all the activities from the previous day in time intervals, the records are also more reliable than standard measures of time allocated to work available in other federal data sets that require respondents to recall how much they worked over the previous year or week. These diaries contain much less noise than typical survey results.
To measure remote work, I used the “remotability” index from professors Jonathan Dingel and Brent Neiman’s 2020 paper in the Journal of Public Economics, which is based on the Department of Labor’s O*NET task-level data on how many tasks in an occupation can be done remotely.
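To make that construction concrete, here is a minimal sketch of how a task-based remotability share can be computed. The occupations, flags, and column names below are hypothetical illustrations, not the actual O*NET coding or Dingel and Neiman’s exact procedure.

```python
import pandas as pd

# Hypothetical O*NET-style task data: one row per task within an occupation,
# flagged 1 if the task can plausibly be performed remotely, 0 otherwise.
tasks = pd.DataFrame({
    "occupation": ["economist", "economist", "economist", "chef", "chef"],
    "remotable":  [1, 1, 0, 0, 0],
})

# Occupation-level "remotability" index: the share of an occupation's tasks
# that can be done remotely.
remotability = tasks.groupby("occupation")["remotable"].mean()
print(remotability)
# chef         0.000000
# economist    0.666667
```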
Updated 2023 time use patterns
My recently updated research compares how people have changed their time allocation between 2019 and 2023 when they work in remote jobs versus hybrid or on-site.
While remote workers spent 28 minutes more per day at work than their non-remote counterparts in 2019, they drastically reduced their time at work over the pandemic: 32 minutes per day less in 2020, 41 minutes less in 2021, 57 minutes less in 2022, and 35 minutes less in 2023. Conversely, workers in more remote jobs allocated more time toward leisure over these years.
Do these changes in time use simply reflect differences in the type of worker within remote jobs over time? All my results control for demographic factors, such as age and education, hourly wages, and differences across industries and occupations. I also studied changes in the composition of more versus less remote jobs over these years, finding minimal differences.
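A stylized sketch of this kind of specification appears below. It assumes an ATUS-style person-level extract with illustrative column names (work_minutes, remotability, and so on) rather than the actual ATUS variables, and it is not the paper’s exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical ATUS-style extract: one row per respondent diary day.
df = pd.read_csv("atus_extract.csv")  # illustrative file name

# The remotability-by-year interactions trace out how time at work changed
# in more-remote jobs relative to 2019, holding fixed demographics, wages,
# and industry/occupation differences.
model = smf.ols(
    "work_minutes ~ remotability * C(year, Treatment(2019))"
    " + age + C(education) + log_wage + C(industry) + C(occupation)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.params.filter(like="remotability:"))
```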
Interestingly, time spent on home production (household chores, caring for household members) and shopping did not increase significantly. What did increase, albeit slightly, was time spent on other activities not otherwise classified. Overall, remote workers are not merely reallocating their work time to household activities but are genuinely engaging in more leisure.
Labor and leisure changes varied across different demographic groups, and these trends diverged in some cases in 2023. In particular, men reduced their time at work by 33 minutes per day in 2021 and 58 minutes per day in 2022, relative to 2019, but only 13 minutes per day in 2023.
In contrast, women reduced their time at work over these years, and the decline intensified in 2023, when they cut their time at work by 76 minutes per day relative to 2019. Singles and those without children also showed a steeper decline in labor and an increase in leisure activities, though the effects were less significant in 2023.
Crucially, these declines in time at work are not driven by the role of commuting. While commute times have declined overall, the bulk of the decline in time at work has been driven by actual time working—not commuting.
These results provide insights into the ongoing debate about quiet quitting, where employees may be dissatisfied with their jobs and reduce their efforts. While the data does not directly measure job satisfaction, ATUS includes some information on subjective well-being. Remote workers reported slightly higher life satisfaction and felt more rested than their in-office counterparts in 2021, suggesting they are not necessarily quiet quitting. Instead, they might be reallocating their time to activities they prefer, enabled by the flexibility of remote work.
Understanding why
Why such a large change in how much men and women worked in 2023? Women in remote-intensive jobs work 46 minutes per day more than their less remote counterparts, consistent with my writing last year, which did not yet include 2023 data. However, the decline in time at work accelerated more in 2023 for women in remote jobs.
One possibility is that more women than men have reported burnout, although women had already decreased their time at work in 2022. Another possibility is that there has been some switching back to in-person work, which may be harder for some women who have more caregiving responsibilities, particularly in light of prior research of mine with Chris Herbst and Ali Umair documenting the significant effect that state regulations had on the availability of childcare during the pandemic. In this sense, men in more remote-intensive jobs may have returned to the office more in 2023 in their hybrid jobs, whereas women did not.
I find that time spent on childcare among women with children grew by 28 minutes per day in 2022, but only 15 minutes per day in 2023, relative to 2019. The same patterns do not exist for men, and while these childcare differences are part of the explanation, they are not the full story.
Could preferences over in-person versus remote work explain the differences in time use? In companion work, I found that men dislike full working-from-home arrangements, whereas I do not see the same for women.
The data also suggests a preference toward remote work among women, so if these jobs have fewer opportunities and/or requirements around work, then the gradual return to office would result in fewer hours.
Coupled with all of these patterns are findings from Gallup surveys, which show us that only 33% of employees are engaged, so burnout—or low engagement—may be a very real phenomenon.
What are the consequences for productivity? Time will tell, but the positive effects of greater flexibility may have offset the negative effect of fewer hours worked. However, much more work, including understanding differences in time use across genders, remains to be done.
Moving Slow and Fixing Things
This article was originally published on Lawfare with Iain Nash, Scott J. Shackelford, and Hannibal Travis.
Silicon Valley, and the U.S. tech sector more broadly, has changed the world in part by embracing a “move fast and break things” mentality that Mark Zuckerberg popularized but that pervaded the industry long before he founded FaceMash in his Harvard dorm room. Consider that Microsoft introduced “Patch Tuesday” in 2003, which began a monthly process of updating buggy code that has continued for more than 20 years.
While it is true that the tech sector has attempted to break with such a reactive and flippant response to security concerns, cyberattacks continue at an alarming rate. In fact, given the rapid evolution of artificial intelligence (AI), ransomware is getting easier to launch and is impacting more victims than ever before. According to reporting from The Hill, criminals stole more than $1 billion from U.S. organizations in 2023, which is the highest amount on record and represents a 70 percent increase in the number of victims over 2022.
As a result, there are growing calls from regulators around the world to change the risk equation. An example is the 2023 U.S. National Cybersecurity Strategy, which argues that “[w]e must hold the stewards of our data accountable for the protection of personal data; drive the development of more secure connected devices; and reshape laws that govern liability for data losses and harm caused by cybersecurity errors, software vulnerabilities, and other risks created by software and digital technologies.” This sentiment represents nothing less than a repudiation of the “Patch Tuesday” mentality and with it the willingness to put the onus on end users for the cybersecurity failings of software vendors. The Biden administration, instead, is promoting a view that shifts “liability onto those entities that fail to take reasonable precautions to secure their software.”
What exact form such liability should take is up for debate. Products liability law and the defect model is one clear option, and courts across the United States have already been applying it using both strict liability and risk utility framings in a variety of cases including litigation related to accidents involving Teslas. In considering this idea, here we argue that it is important to learn from the European Union context, which has long been a global leader in tech governance even at the risk of harming innovation. Most recently, the EU has agreed to reform its Product Liability Directive to include software. When combined with other developments, we are seeing a new liability regime crystallize that incorporates accountability, transparency, and secure-by-design concepts. This new regime has major implications both for U.S. firms operating in Europe and for U.S. policymakers charting a road ahead.
The EU’s various levers to shape software liability, and more broadly the privacy and cybersecurity landscape, are instructive in a number of ways in helping to chart possible paths ahead, and each is deserving of regime effectiveness research to gauge their respective utility. These include:
Extending Products Liability to Include Cybersecurity Failings: Following the EU’s lead in expanding the definition of “product” to include software and its updates, U.S. policymakers could explore extending traditional products liability to cover losses due to cybersecurity breaches. This would align incentives for businesses to maintain robust cybersecurity practices and offer clearer legal recourse for consumers affected by such failings.
Adopting a “Secure by Design” Approach: New EU legislation, such as the Cyber Resilience Act, mandates that products be secure from the outset. U.S. policy could benefit from similar regulations that require cybersecurity to be an integral part of the design process for all digital products. This would shift some responsibility away from end users to manufacturers, promoting a proactive rather than reactive approach to cybersecurity.
Enhancing Transparency and Accountability Through Regulatory Frameworks: Inspired by the EU’s comprehensive regulatory measures like the General Data Protection Regulation (GDPR) and the AI Act discussed below, the U.S. could benefit from creating or strengthening frameworks that enforce transparency and accountability in data handling and cybersecurity. Building on the recent guidance from the U.S. Securities and Exchange Commission that requires publicly traded companies to report material cybersecurity incidents within four days, this could include potential requirements for risk assessments, incident disclosures, and a systematic approach to managing cyber risks across all sectors, not just critical infrastructure.
Each of these themes is explored in turn.
Extending Products Liability to Include Cybersecurity Failings
The EU has taken a more detailed, and broader, approach to imposing liability on software developers than what has commonly been argued for in the U.S. context.
In recognition that many products, from toasters to cars, have gotten increasingly “smart,” the EU began a process in 2022 to update its products liability regime, which had been in place and largely unchanged since 1985. The reforms agreed to under the Product Liability Directive include an expansion of what’s considered a “product” to cover not just hardware but also stand-alone software such as firmware, applications, and computer programs, along with AI systems. Exceptions apply for certain free and open-source software, which has long been an area of concern for proponents of more robust software liability regimes.
Relatedly, the concept of “defect” has been expanded to include cybersecurity vulnerabilities, including a failure to patch. The notion of what constitutes “reasonable” cybersecurity in this context, such as a product that does not provide the expected level of service, builds on other EU acts and directives, discussed below.
Recoverable damages have also been broadened to include the destruction or corruption of data, along with mental health impacts following a breach. Covered businesses can also include internet platforms, the intent being that there is always an “EU-based business that can be held liable.” Even resellers who substantially modify products and put them back into the stream of commerce may be held liable. It’s now also easier for Europeans to prove their claims, thanks to the introduction of a more robust U.S.-style discovery process and class actions, along with an eased burden of proof on claimants and an extension of the covered period from 10 to 25 years in some cases.
Although the EU has long been a global leader on data governance and products liability, the same has not necessarily been the case for cybersecurity—particularly pertaining to critical infrastructure protection. In 2016, the EU worked to change that through the introduction of the Network and Information Security (NIS) Directive, which was updated in 2023 as NIS2.
Among other things, NIS2 expanded the scope of coverage to new “essential” and “important” sectors, including cloud and digital marketplace providers, and required EU member states to designate Computer Security Incident Response Teams (CSIRTs) and join Cooperation Groups, which are in essence international information sharing and analysis centers, or ISACs. Covered businesses must take “appropriate” steps to safeguard their networks, secure their supply chains, and notify national authorities in the event of a breach.
In sum, NIS2 regulates software in a manner more familiar in the U.S. context, relying on information sharing and a risk management approach to standardize common activities like incident reporting.
Further, the European Union’s Cybersecurity Act, which took effect in June 2019, establishes a comprehensive framework for the certification of cybersecurity across information and communications technology products, services, and processes. The regulation aims to bolster trust in the digital market by ensuring that these entities adhere to standardized cybersecurity criteria. This certification scheme is voluntary, but it affects manufacturers and service providers by enabling them to demonstrate their compliance with high levels of cybersecurity, thereby enhancing market perception and consumer trust in their offerings. The act fits within the broader EU strategy of leveraging regulatory measures over direct state control, epitomized by the role of the European Union Agency for Cybersecurity (ENISA). ENISA has become a major entity in shaping and supporting the cybersecurity landscape across the EU, despite facing challenges in establishing its authority and influence.
From a products liability perspective, the Cybersecurity Act shifts the landscape by integrating cybersecurity into the core criteria for product safety and performance evaluations. By adhering to established certification standards, companies not only mitigate the risks of cyber threats but also reduce potential legal liabilities associated with cybersecurity failures. The act encourages transparency and accountability in cybersecurity practices, pushing companies to proactively manage and disclose cyber risks, which can influence their liability in cases of cyber breaches.
This approach aligns with the EU’s broader regulatory security state model, which emphasizes governance through regulation and expertise rather than through direct governmental intervention. This model is characterized by the deployment of indirect regulatory tools and reliance on the expertise and performance of various stakeholders to manage security issues, rather than solely depending on direct state power and authority. The voluntary nature of the standards has posed challenges, however, leading to uneven adoption and to vulnerabilities in products that do not comply with these standards and minimum security objectives. Nevertheless, some studies have noted that the act has at least helped the European Union behave in a coordinated way.
Adopting a “Secure by Design” Approach
In addition to the proposal to include software within the scope of products liability legislation, the EU has introduced unified cybersecurity requirements for products sold within the common market, including pure software products. The Cyber Resilience Act (CRA), a forthcoming EU regulation, combines detailed cybersecurity requirements, such as patch management and secure-by-design principles, with a comprehensive liability regime. The CRA can be considered more comprehensive than California’s “Internet of Things” (IoT) security law: the CRA’s cybersecurity requirements go far beyond California’s reasonable security features and password requirements, and the CRA applies to both IoT and software products.
Fundamentally, the CRA requires that products be introduced to the market with all known vulnerabilities patched and that they be developed on a “secure by design” basis. Developers are also required to conduct and maintain a cybersecurity risk assessment, provide a software bill of materials listing the third-party software components used in their products, and ensure that security updates are available for a period of at least five years. Developers and manufacturers of ordinary products can self-certify conformity with the legislation, while “important” and “critical” products will require a more in-depth conformity assessment and an independent one, respectively.
Noncompliance with the CRA follows the model used in the GDPR and can result in a fine of up to 15 million euros or 2.5 percent of total revenue (whichever is larger) for breaches of core requirements, while other breaches can result in a fine of up to 10 million euros or 2 percent of total revenue. However, there is no mechanism under the act for a complainant to enforce the CRA directly, and complainants must petition their local regulator if they believe the requirements have not been met.
Enhancing Transparency and Accountability Through Regulatory Frameworks
The EU’s AI Act introduces a regulatory framework to protect users from harms caused by the failure of an AI system in the name of safety and transparency. The act classifies AI systems into three categories—prohibited, high-risk, and non-high-risk—and is reminiscent of the CRA in its comprehensive scope. Prohibited applications, such as those involving subliminal techniques or social scoring, are banned within the EU. High-risk applications, which include medical devices and credit scoring systems, must adhere to stringent requirements, including maintaining a risk management system, ensuring human oversight, and registering in the EU’s database of high-risk AI systems. Non-high-risk applications face minimal to no regulatory obligations.
The act also addresses general purpose AI models, like foundation and large language models, imposing obligations similar to those for high-risk systems. These include maintaining a copyright policy and publishing a summary of the training data. Enforcement is managed by domestic regulators and coordinated at the EU level by the newly established European Artificial Intelligence Board and the European Office for AI, where complaints can also be lodged against noncompliant AI providers.
There are penalties for noncompliance. Violations involving prohibited AI can result in fines of up to 30.3 million euros or 7 percent of total revenue. High-risk AI breaches may lead to fines of up to 15.14 million euros or 3 percent of total revenue, and providing misleading information to regulators can attract fines of up to 7.5 million euros or 1.5 percent of total revenue. Whether the higher or lower amount applies depends on whether the entity is a large corporation or a small to medium-sized enterprise. One of the major limitations in the EU’s AI liability regime, however, lies in its broad categorization of risk. In reality, there are many different dimensions of risk, let alone definitions of fairness in AI systems. In particular, “explainability” and “interpretability” of AI systems are often used interchangeably, and that looseness of language will make it difficult to enforce and promote trustworthy AI practices.
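The tiered “higher of a fixed amount or a share of revenue” structure, common to the GDPR, the CRA, and the AI Act as described above, is easy to illustrate. The sketch below uses the AI Act figures cited in this article and applies the large-corporation rule (the higher amount); the helper function and revenue figure are hypothetical.

```python
# Illustrative only: computes the "up to" penalty ceiling under the tiered
# structure described above, where the cap for a large corporation is the
# higher of a fixed amount or a percentage of total revenue.
def penalty_cap(revenue_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the maximum possible fine for a large firm."""
    return max(fixed_eur, pct * revenue_eur)

revenue = 2_000_000_000  # hypothetical 2 billion euro annual revenue

print(penalty_cap(revenue, 30_300_000, 0.07))   # prohibited AI: 140,000,000.0
print(penalty_cap(revenue, 15_140_000, 0.03))   # high-risk breaches: 60,000,000.0
print(penalty_cap(revenue, 7_500_000, 0.015))   # misleading info: 30,000,000.0
```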
In the event that a user is harmed following their use of a high-risk AI system, they will be able to benefit from a proposed companion directive, which introduces additional civil liability requirements for AI systems. Under the proposed directive, the user will be able to seek a court order compelling the provider of the AI system to disclose relevant evidence relating to the suspected harm.
However, the claimant will be required to demonstrate to the relevant court that the provider has failed to comply with its obligations under the AI Act in order for their claim to succeed. Harm that occurs to the claimant despite the provider meeting its obligations under the AI Act is not recoverable under this legislation.
This approach, as is the case with data privacy in the EU context, is far more comprehensive than the Biden administration’s AI executive order and sets out accountability and transparency rules that are already shaping global AI governance.
Like the AI Act, the General Data Protection Regulation (GDPR) is comprehensive in scope. The GDPR, a data protection law, came into effect in the European Union on May 25, 2018, aiming to empower individuals with sovereignty over their personal data and to simplify the regulatory environment for business. In particular, the GDPR requires that companies that process personal data be accountable for handling it securely and responsibly. This includes ensuring that data processing is lawful, fair, transparent, and limited to the purposes for which it was collected. Product and service providers must disclose their data processing practices and, in many cases, seek explicit consent from users, making them directly liable for noncompliance. The GDPR also gives individuals the option of demanding that a company delete their personal data or transfer it to another provider.
Although there are penalties for noncompliance for both primary data controllers and potential third parties, it has been very difficult to enforce the regulation and prove liability. For example, the European Union’s own internal analysis has explained how international data cooperation has been challenging due to factors like “lack of practice, shortcomings in the legal framework, and problems in producing evidence.” Furthermore, since consumers often are searching for specific information and do not have other options, they simply consent to the relevant disclaimers on a site to enter and never think twice about the data that was shared and/or the possibility of filing a lawsuit against a company for potential damages from, say, a data breach.
Furthermore, empirical studies generally point toward a negative effect of the GDPR on economic activity and innovation. Some studies have found that the GDPR led to a decline in new venture funding and new ventures, particularly in more data-intensive and business-to-consumer sectors. Others found that companies exposed to the GDPR incurred an 8 percent reduction in profits and a 2 percent decrease in sales, concentrated particularly among small and medium-sized enterprises. There is additional evidence that the GDPR led to a 15 percent decline in web traffic and a decrease in engagement rates on websites.
Finally, the Digital Services Act (DSA) “regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms.” It took effect in a staggered process in 2022 and promised risk reduction, democratic oversight, and improvement of online rights. Articles 6(1), 9(1), and 22 of the DSA could be significant after cyberattacks, while Articles 17 through 21 could be crucial protections for users of online platforms whose accounts are suspended or terminated due to intrusions or misuse attributable to cyber threats. Article 9(1) obliges certain platforms to remove illegal material upon being served with notice of specific items by “judicial or administrative authorities.” Regarding online dangers other than intellectual property infringement and incitement to violence, Recital 12 of the DSA references “stalking” and “the unlawful non-consensual sharing of private images.”
In the United States, the law on loss of access to online accounts remains a patchwork, even in cases involving data breaches covered by federal statutes. While some courts allow breach of express or implied contract as a theory of recovery, others may not, and arbitration clauses are a formidable challenge in some cases. Articles 20(4) and 21 of the DSA strengthen the right to use online platforms and not to suffer arbitrary deprivation of access.
Settlements of class actions like those involving iPhone battery life and Google Chrome incognito mode do suggest that defective software and misleading marketing of technology claims have traction in U.S. courts without further reforms. Products liability and data security litigation remains viable due to the similarity of many U.S. states’ laws and the intention of the federal class-action procedure to make asserting small-dollar claims economical.
Lessons for Policymakers
A natural question is whether Europe has taken a more active regulatory approach because its technology sector is much smaller. While having a smaller technology sector in Europe inevitably means that there are different political economy dynamics, including lower returns to lobbying, there is nonetheless a growing recognition that the absence of clearer guidelines and regulations is a lose-lose situation in the long run. For instance, a voluminous body of empirical literature documents a rise in concentration and market power, particularly among digital intermediaries, that could be attributed to lax and ambiguous guidelines. Only recently did the U.S. Securities and Exchange Commission introduce guidance requiring that public companies report data breaches within four business days after the incident is determined to be material.
The EU’s efforts to extend products liability law to software, adopt a secure by design approach similar to that called for in the 2023 U.S. National Cybersecurity Strategy, and enhance transparency and accountability across the digital ecosystem have solidified its place as a global leader in tech governance.
Several of these steps could be taken at once, perhaps as part of the proposed American Privacy Rights Act, which would offer enhanced powers to the Federal Trade Commission to investigate deceptive or defective products and establish baseline privacy and cybersecurity expectations for American consumers.
At the highest level, if a products liability approach in the U.S. context is to be successful, Congress would need to introduce a package of reforms that would address various barriers to recovery, including the economic loss doctrine and the enforcement of liability waivers. Moreover, the array of EU initiatives surveyed above still give rise to uncertainty, such as a potential cap of 70 million euros on all claims for a specific defective item. And costs should not be underestimated—one U.S. House of Representatives Oversight and Accountability Committee report claimed 20.4 to 46.4 billion euros in new compliance and operation costs introduced by the DSA and the GDPR. Still, such estimates should be weighed against the staggering economic harm introduced by software vulnerabilities discussed above.
A best-case scenario would be for policymakers on both sides of the Atlantic, and beyond, to come together and find common ground to encourage the convergence of baseline software security expectations. This process could either be kicked off through a special event, such as a Global Responsible Software Summit modeled after recent ransomware and democracy summits, or be added to an upcoming major gathering.
No nation is an island in cyberspace, no matter how much some wish they were. How leading cyber powers—including the EU and the U.S.—approach the issue of software liability will make worldwide ripples, which, depending on how these interventions are crafted, could turn into a tsunami.
Bolstered by Faith
This article was originally published in City Journal.
The Covid-19 pandemic challenged much more than health-care systems—it also tested communities’ resilience, particularly their ability to handle economic shocks. While we often look to fiscal policy or other government actions to explain localities’ economic outcomes, an overlooked factor plays a significant role: religiosity.
Baylor University professor Byron Johnson and I researched whether communities with higher levels of religiosity before the pandemic fared better economically during the crisis. Comparing data from the Quarterly Census of Employment and Wages between 2019 and 2023 with religiosity levels from the Religion Census in 2010 and 2020, we found that communities where religious adherence had grown over the decade showed notably better employment and business-establishment trends during the pandemic years.
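A rough sketch of this kind of county-level comparison appears below. The file layout, column names, and the simple correlation are illustrative assumptions, not our actual data pipeline or estimation strategy.

```python
import pandas as pd

# Hypothetical county-level inputs (file and column names are illustrative):
#   qcew.csv     - fips, year, employment (from the QCEW)
#   religion.csv - fips, rate_2010, rate_2020 (adherents per 1,000, Religion Census)
qcew = pd.read_csv("qcew.csv")
religion = pd.read_csv("religion.csv")

# Decade-long change in religious adherence.
religion["adherence_growth"] = religion["rate_2020"] - religion["rate_2010"]

# Pandemic-era employment growth, 2019 -> 2023.
emp = qcew.pivot(index="fips", columns="year", values="employment").reset_index()
emp["emp_growth"] = emp[2023] / emp[2019] - 1

# Correlate pre-pandemic religiosity growth with pandemic-era outcomes.
merged = emp.merge(religion, on="fips")
print(merged[["adherence_growth", "emp_growth"]].corr())
```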
Why would religiosity matter so much in times of economic hardship? Religious communities often act as social safety nets, creating strong networks of moral and practical support. The solidarity this creates can help cushion the blow during economic downturns. Faithful communities often rally in hard times, providing assistance to those in need, from food banks to job networking. Their collective resilience supports not just individual believers but also the broader community infrastructure.
This truth doesn’t just hold for traditionally religious communities. Any community with strong social bonds and a shared sense of purpose, whether based around religious faith or other collective beliefs and practices, can show similar resolve. Weathering a crisis is about more than beliefs—it’s about the networks and mutual support those beliefs foster.
Our study suggests that shared faith and participation contribute significantly to local economies, helping them bounce back faster and stronger. These findings are particularly relevant for policymakers and community leaders. Investing in community-building efforts and supporting faith-based and other local organizations can be an effective strategy to develop economic resilience. This approach can also serve as preparation for managing future crises.
The interplay between religious observance and economic resilience highlights the need for a broader understanding of what strengthens communities and makes them capable of collective response. In a world where economic shocks are inevitable, recognizing and bolstering these networks could be essential to enduring future challenges.
The Silent Office
This article was originally published in City Journal.
The increasing politicization of American life has profound implications not only for social harmony but also for markets and the workplace. Polarization is changing office dynamics, as workers find it increasingly difficult to navigate religious and political issues on the job.
My recent article in the Journal of Economics, Management and Religion presents the results of a new, nationally representative Ipsos survey. The data illustrate a concerning trend: a large share of the American workforce is reluctant to express personal views on social and political issues on the job, fearing repercussions that could stymie their career advancement. The survey found that roughly 42 percent of employees have withheld their opinions to protect their professional future, reflecting a deep-seated fear among Americans that self-expression could put their jobs in jeopardy.
That anxiety may be tied to the surging number of instances of religious and political discrimination. Indeed, the Equal Employment Opportunity Commission reported a 73 percent rise in religion-based discrimination charges between 1992 and 2020. The reality behind that statistic—a potentially substantial increase in unjust discrimination against religious employees—may have contributed to workers’ growing unease.
The effects of discrimination and self-censorship in the workplace bleed into the labor market. Would-be employees are unlikely to want to work for an intolerant employer. Some 40 percent of survey respondents, for example, indicated that they are less likely to apply to a company they perceive as being hostile to their beliefs. Such perceptions can affect productivity, too, making current employees less loyal to their employers and potentially lowering worker engagement. The broader trend of workers being willing to relocate for jobs that more closely align with their moral and political compass—often for significantly lower pay—underscores a desire for personal integrity, and the extent to which many feel that their current workplace stifles this aim.
Widening polarization also affects consumer behavior. Many Americans are willing to change their consumption habits based on brands’ political and social stances. Fifty-six percent of survey respondents said that they are likely to cease purchasing from brands that oppose their values, with 30 percent claiming already to have done so. This indicates buyers’ desire to support companies with similar values and demonstrates the deep connection between political and social identity and consumer loyalty.
While companies may feel pressured to take stands on issues, the data suggest that companies should be wary, or at least aware, that many consumers will switch to competitors in response to political posturing. These results challenge companies to focus on delivering value. Businesses need to find a balance that respects diverse viewpoints without compromising their principles.
Asking Too Much
This article was originally published in City Journal.
In 2023, the Consumer Financial Protection Bureau (CFPB) issued a final rule to implement Section 1071 of the 2010 Dodd–Frank financial regulation law, aimed at fostering transparency and fairness in lending by mandating the collection of information about the race and sexual orientation of small-business loan applicants. Though well-intentioned, the move has sparked a debate on privacy, data security, and operational challenges for financial institutions.
I recently released a working paper, based on a nationally representative survey of 2,996 respondents, that documents a pronounced reluctance among business owners to share sensitive personal information with lenders. This hesitancy not only challenges the objectives of Section 1071 but also raises questions about tensions between regulatory goals and individuals’ privacy concerns. My study highlights four key findings.
Reluctance to Share Sensitive Information. A substantial portion of respondents expressed discomfort with sharing personal demographic information with financial institutions: 65 percent of participants were either strongly or somewhat opposed to sharing racial information, and an even higher share, 77.1 percent, opposed sharing their sexual orientation. This resistance underscores a broader concern about privacy and data security.
Demographic Variations in Comfort Levels. Older respondents and those with some college education were more likely to express sensitivity about sharing personal information. Interestingly, while majorities across all demographic groups were uncomfortable with revealing such information, males, married individuals, and those identifying as black, Asian, or conservative were comparatively less concerned: 57 percent of black and 55 percent of Asian respondents, compared with 68 percent of Hispanics, reported being uncomfortable divulging racial information.
Impact on Banking Preferences. Individuals hesitant to share their race or sexual orientation were 5–7 percentage points less likely to approve of banks having additional objectives such as promoting environmental sustainability or targeting specific groups for lending. This suggests a link between privacy concerns and a preference for banks to focus on traditional banking concerns.
Effects of Information Treatments. Since many consumers are unaware of how firms and third parties use their data, I also ran an information experiment in which respondents were shown prompts about the effects of data breaches. Respondents who saw a prompt about the 2021 leak of user data from the online trading platform Robinhood were 5 percentage points more likely to prefer not to share racial information with banks. To put that in perspective, 65 percent of respondents already did not want to share racial information, so the prompt made an already reluctant population even less willing to share.
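For readers who want the mechanics, here is a hedged sketch of how such a treatment effect can be estimated, using hypothetical counts of the same rough magnitude as the survey; the real study’s sample sizes and estimator may differ.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts mirroring the design described above: respondents who
# prefer NOT to share racial information, in the control group vs. the group
# shown the Robinhood data-breach prompt. Numbers are illustrative only.
opposed = [650, 700]   # control vs. treated respondents opposed to sharing
nobs = [1000, 1000]    # hypothetical group sizes

stat, pval = proportions_ztest(opposed, nobs)
effect = opposed[1] / nobs[1] - opposed[0] / nobs[0]
print(f"treatment effect: {effect:+.1%} (p = {pval:.3f})")
# roughly the 5-percentage-point shift reported in the text
```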
The findings underscore a critical dichotomy: While Section 1071 of Dodd–Frank seeks to illuminate and address disparities in access to credit, it inadvertently stokes fears of data misuse and breaches among borrowers. This tension is not without consequence. Financial institutions, tasked with implementing the CFPB’s new rule, face the dual challenge of complying with regulatory requirements and addressing borrowers’ fears. The study highlights the need for a nuanced approach to data collection—one that respects borrower privacy while striving for the transparency and fairness that Section 1071 aims to achieve. My research calls for reevaluating how regulatory objectives are pursued and suggests the need for a framework that recognizes the legitimate concerns of borrowers, the very people whom the regulation was established to help.
Housing market data suggests the most optimistic buyers during the pandemic are more likely to stop paying their mortgages
This article was originally published in Fortune Magazine with William Larson.
Traditional methods for forecasting housing prices and broader economic indicators are proving insufficient. In our recent research, we explored an overlooked aspect of home buying: the significance of buyers’ expectations. We found that mortgage borrowers’ expectations about future housing prices are crucial for understanding the health of the economy.
There’s a consensus that expectations about future increases in housing prices and interest rates significantly influence housing market dynamics. The logic is straightforward: If individuals believe the value of homes will rise, they are more inclined to take on more debt. This effect is amplified in the housing market because it is difficult to bet against downturns, making the positive outlooks of buyers more influential. Previous studies have indicated that this optimism can drive rapid, speculation-fueled increases in housing prices, creating “bubbles” of inflated house prices.
What occurs, however, when housing prices remain elevated but expectations begin to decline?
Our findings indicate that expectations are critical in the decision-making processes of mortgage borrowers. During the COVID-19 pandemic, there was a period when confidence in future housing price increases waned, despite actual prices still rising.
We observed that borrowers who were initially the most optimistic about price increases were significantly more likely to request mortgage forbearance—a pause or reduction in payments—during this episode: about 50% more than the broader mortgage-borrowing population (6% versus 4% in our study). This underscores the significant impact of borrower expectations on the housing market and economic stability.
Expectations trump reality
We began our research with data from the Federal Housing Finance Agency, specifically the National Mortgage Database, and noticed something intriguing: People who were optimistic before 2020 about future increases in house prices were more likely to pause their mortgage payments early in the COVID-19 pandemic, despite the fact that house prices were still going up. This observation led us to understand that these borrowers were reacting more to their expectations about the future than to the actual market conditions at the time. When their outlook on house prices temporarily worsened, they opted for forbearance. However, as their optimism returned toward the end of 2020 and over the course of the pandemic, these same borrowers began resuming their mortgage payments.
This pattern underscores how crucial expectations are in shaping how borrowers act, which, in turn, has significant effects on the broader economy. After our study period, which ended in 2022, expectations dropped substantially heading into 2023. Our findings suggest that the wave of optimistic borrowers between 2021 and mid-2022 may be particularly vulnerable to such drops in expectations if paired with negative equity or job loss. Thankfully for the mortgage market, the economy—and house prices—remained strong throughout this most recent episode of falling expectations.
Our research serves as a warning to those involved in housing policy and finance: It's essential to consider what borrowers are thinking and expecting, not just the usual financial indicators like interest rates, monthly payments, or how much debt they're taking on compared to the value of their home.
Understanding people's expectations is tricky—they're hard to measure, and they introduce a challenge known as adverse selection, where borrowers have more information about their ability to pay back loans than lenders or investors do. Discovering that something not typically tracked by mortgage investors, like borrower expectations, can have a big impact on whether loans are paid as agreed is striking and warrants more attention.
For those regulating and monitoring the housing market, grasping the relationship between what people expect and what's actually happening can lead to better forecasts and smarter policymaking.
Christos A. Makridis, Ph.D., is an associate research professor at Arizona State University, the University of Nicosia, and the founder and CEO of Dainamic Banking.
William D. Larson, Ph.D., is a senior researcher in the U.S. Treasury’s Office of Financial Research, and a non-resident fellow at the George Washington University’s Center for Economic Research. This research was conducted while Larson was a senior economist at the Federal Housing Finance Agency (FHFA). The views presented here are those of the authors alone and not of the U.S. Treasury, FHFA, or the U.S. Government.
Why Solana will prevail despite Ethereum ETFs
This article was originally published on Cointelegraph (with Connor O’Shea).
The cryptocurrency world is abuzz with bullish sentiment thanks to Bitcoin spot ETFs. Investors have been quick to accept that Ether spot ETFs will follow in the months ahead.
Many investors have begun to speculate that many altcoins will also have ETFs, which has driven price appreciation and further speculation. But all the enthusiasm has led many to overlook an obvious contender to Ethereum — Solana, which has beaten many expectations and continues to boast a sophisticated tech team.
There has been no shortage of stories about Solana and its links to FTX founder Sam Bankman-Fried, with many predicting its demise. However, Solana has weathered the storm according to a handful of metrics. For instance, the number of active addresses on the network has nearly returned to its 2022 level, and the number of new addresses has continued to grow almost as fast as it did in 2022. In fact, the number of unique active wallets (UAW) is up from 2022.
And that’s not to mention the reality that active addresses can be manipulated. An alternative metric, namely capital efficiency (i.e., decentralized exchange volume per dollar of total value locked), suggests Solana has outpaced Ethereum in recent months.
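Capital efficiency, as defined here, is just a ratio, and a tiny sketch makes the comparison explicit. The dollar figures below are hypothetical placeholders, not live market data.

```python
# Capital efficiency as defined in the text: DEX volume per dollar of TVL.
def capital_efficiency(dex_volume_usd: float, tvl_usd: float) -> float:
    """Decentralized exchange volume divided by total value locked."""
    return dex_volume_usd / tvl_usd

# Hypothetical illustrative figures, not live data.
solana = capital_efficiency(dex_volume_usd=1.5e9, tvl_usd=4.0e9)     # 0.375
ethereum = capital_efficiency(dex_volume_usd=2.0e9, tvl_usd=50.0e9)  # 0.040
print(f"Solana: {solana:.3f}, Ethereum: {ethereum:.3f}")
```

On this metric, a chain that turns over more exchange volume per dollar locked is putting its capital to more intensive use, which is the sense in which the text says Solana has outpaced Ethereum.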
To be sure, Solana isn’t operating perfectly, but it has clearly defied the expectations of many who thought it might collapse following the FTX blow-up. A large part of its recovery after FTX, and its growth since then, has been driven by seemingly wise leadership, which in turn shapes its technological investments, strategy, and ultimately community engagement.
The blockchain technology landscape, particularly at the layer-1 (L1) level, currently falls short of the transformative financial future many envisioned. While the initial promise of blockchain offered a vision of faster, cheaper, more efficient, and censorship-resistant financial systems, the reality today presents significant challenges.
The L1 landscape is characterized by fragmentation of liquidity across a range of layer-2 solutions (L2s) and an absence of scalability that hampers efficiency and user experience in the decentralized exchange space, coupled with concerns about the degree of centralization in the centralized exchange space. This fragmentation has led to a piecemeal ecosystem where the seamless integration and interoperability necessary for a truly revolutionary financial system remain elusive. As a result, the blockchain community finds itself at a crossroads, seeking solutions that can fulfill the early promises of this technology.
Efforts to scale blockchain technology today are diverse, with each project taking a unique approach to overcome limitations in speed, efficiency, and interoperability. The Ethereum blockchain, for example, is pursuing a multi-layer strategy, incorporating both layer-2 scaling solutions and sharding to increase transaction throughput without sacrificing security.
Meanwhile, projects like Cosmos and Polkadot are exploring a multi-chain architecture that allows for specialized blockchains to communicate and transact seamlessly. Solana, along with newer entrants like Sui and Aptos, proposes an alternative approach, focusing on high throughput and efficiency at the layer-1 level itself. Each of these approaches represents a different path towards achieving scalability, with their own set of trade-offs between decentralization, security, and performance. The variety of solutions underlines the complexity of the scalability challenge and the blockchain community's commitment to finding a way forward.
Solana stands out for its unique approach to addressing the core issues plaguing the blockchain ecosystem and its robust community support — evidenced by its resilience post-FTX and the success of its global hackathons, which underscore the platform's strong foundation. And significant UX improvements, notably with mobile integration through Saga phones and competitive platforms like Jupiter — which rivals Uniswap — make Solana highly accessible.
Solana has also demonstrated its ability to handle finance at scale, offering 400ms block times for finality compared with Ethereum’s longer durations; initiatives like Firedancer and local fee markets further exemplify Solana’s technological edge. The platform’s emphasis on seamless transactions without the need for bridging or dealing with fractured liquidity, coupled with its application in real-world solutions like decentralized physical infrastructure (DePIN), positions Solana as a leader in the blockchain space.
That’s not to say that Solana is guaranteed to surpass Ethereum, or even Bitcoin, but it does mean that Solana is no longer an underdog. And perhaps before any altcoin gets a spot ETF, Solana will have one of its own that will bring greater competition to the blockchain space.
How much longer can indebted Americans keep buying crypto?
This article was originally published on Cointelegraph.
Despite many seemingly positive reports about retail spending or the unemployment rate in the United States, the nation continues to battle several structural challenges that have only grown more severe, with a historic $34 trillion in public debt and a record $1.13 trillion in consumer credit card debt. Alexander Hamilton famously remarked that the "national debt, if it is not excessive, will be to us a national blessing," but the scale of current debt raises questions about the sustainability of fiscal policies and their long-term economic impact.
Concerns about the public debt used to be more of a fringe topic that conservatives and libertarians argued about. However, recent remarks by leading figures in the banking sector underscore the gravity of the situation. JPMorgan Chase CEO Jamie Dimon's warning of a global market "rebellion," Bank of America CEO Brian Moynihan's call for decisive action, “The Black Swan” author Nassim Taleb's "death spiral" prognosis, and former House Speaker Paul Ryan's description of the debt crisis as "the most predictable crisis we’ve ever had" highlight the urgent need for a reassessment of the United States' fiscal trajectory.
The public's growing anxiety over government debt, with 57% of Americans surveyed by the Pew Research Center advocating for its reduction, reflects a shift in societal priorities towards fiscal responsibility. This concern gains further significance in light of its real-world implications, notably on housing affordability and the broader economic landscape. The precarious state of the housing market, exacerbated by rising interest rates, epitomizes the link between fiscal policy and individual economic prospects: as public debt grows, so too do interest rates.
The global standing of the U.S. dollar, which provides a "convenience yield," plays a pivotal role in the country's ability to manage its substantial debt without immediate negative consequences. However, a recent working paper released through the National Bureau of Economic Research finds that the loss of the dollar's status could amplify the debt burden by as much as 30%. This underscores the imperative to critically evaluate the nation's fiscal direction.
The challenge facing the nation — and many other developed countries — mirrors what is going on for many consumers. Americans have increasingly turned to their credit cards, without paying down their balances, to cover regular expenses. A new report released through the New York Federal Reserve, for instance, shows that total credit card debt increased by $50 billion (or 4.6%) from the previous quarter to $1.13 trillion, marking the highest level on record in Fed data dating back to 2003 and the ninth consecutive annual increase.
The New York Fed report also shows an uptick in borrowers who are struggling with credit card, student, and auto loan payments. For example, 3.1% of outstanding debt was in some stage of delinquency in December — up from the 3% recorded the previous quarter, although still below the average 4.7% rate seen before the Covid-19 pandemic began.
"Credit card and auto loan transitions into delinquency are still rising above pre-pandemic levels," said Wilbert van der Klaauw, economic research advisor at the New York Fed. "This signals increased financial stress, especially among younger and lower-income households."
An important strategy for retail investors during periods of uncertainty is to diversify. But how you diversify matters. Investing in the S&P 500 is good, but if all your savings are locked up in the S&P 500 and it plummets, then you're in trouble. Even if a plunge took place in the next year, the S&P 500 would likely rebound eventually, but you would still have to weather the storm.
An additional strategy is to have some exposure to crypto. Many people focus on Bitcoin, Ethereum, and other digital currencies. But at least equally important — if not more so — for long-run value creation in the digital assets market is hash rate, which reflects the amount of computing power securing a proof-of-work blockchain. Bitcoin, for instance, has seen a sustained increase in its hash rate alongside its price appreciation.
The upcoming year is an important one with substantial macroeconomic risks for both the nation and the consumer. Although some economic reports have been positive, we need to pay attention to the fundamentals and whether the data reflects transitory versus permanent shocks. The challenge for policymakers is to craft fiscal policies that foster sustainable growth and productivity, steering clear of scenarios where short-term fiscal expediencies precipitate long-term economic liabilities. The current path, however, mirrors the predicament of a borrower trapped in a cycle of debt, with interest rates surpassing their monthly income.
Let’s help make 2024 a transformational year for the better!
Understanding patterns of cyber conflict coverage in media
This article was originally published on Binding Hook (with Lennart Maschmeyer and Max Smeets).
In February 2014, the cyber threat intelligence community was stirred by the discovery of ‘The Mask’, a highly advanced hacking group thought to be backed by a national government. The group had been targeting a range of entities, including government agencies and energy companies. Kaspersky Lab, a Russian cybersecurity company, described its activity as the world’s most advanced APT (Advanced Persistent Threat) campaign. However, despite the sophistication of The Mask, which had been active since at least 2007, its media coverage was surprisingly limited, failing to make significant headlines.
Fast forward to 2018, when Kaspersky Lab reported on Olympic Destroyer, the cyber attack that disrupted the 2018 Winter Olympics, paralyzing IT systems and causing widespread disruption. This incident garnered immediate and extensive media coverage, with over 2,000 news stories published, showcasing a stark contrast in the media’s approach to reporting cyber operations.
These two cases highlight a critical and intriguing question: Why do some cyber operations receive extensive media attention while others do not? The question matters because media reporting shapes how the public and policymakers perceive the cyber threat landscape.
Yet, there has been a surprising lack of analytical research addressing why some cyber operations attract more media attention than others. Until now, our understanding has largely been shaped by anecdotal evidence rather than systematic analysis.
Our recently published academic article in the Journal of Peace Research begins to tackle this question by introducing a comprehensive collection of cyber operations reports derived from commercial threat intelligence providers, which are often the primary sources for journalists. Using multivariate regression, we identify the characteristics that correlate with the extent of media reporting on cyber operations.
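As a sketch of what such a specification might look like: story counts are heavily skewed, so a count model such as Poisson is one natural choice, though our published specification may differ. The dataset, file, and variable names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical operation-level dataset built from threat intelligence reports:
#   news_stories - number of media stories about the operation
#   disruptive   - 1 if the operation had disruptive/destructive effects
#   zero_day     - 1 if it used zero-day exploits
#   target_mil, target_fin - target-sector indicators
#   adversary    - 1 if attributed to Russia, China, Iran, or North Korea
ops = pd.read_csv("cyber_ops.csv")  # illustrative file name

# Multivariate count regression: coefficients indicate which operation
# characteristics correlate with more (or less) media coverage.
model = smf.poisson(
    "news_stories ~ disruptive + zero_day + target_mil"
    " + target_fin + adversary",
    data=ops,
).fit()
print(model.summary())
```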
Four tests
First, we explored the intensity of effects produced by cyber operations. Historically, violent and shocking news stories have garnered more attention, encapsulated in the adage, ‘if it bleeds, it leads.’ We hypothesized that the more intense and threatening the effects of a cyber operation, the greater the media coverage it would receive. Our findings revealed that disruptive and destructive cyber operations do generate more news stories than their espionage counterparts; however, the difference is not statistically significant.
Next, we examined the type of target involved in cyber operations. Previous assumptions drew a parallel between media coverage of cyber operations and coverage of terrorism, where attacks on more politically or symbolically significant targets garner more attention. However, our research indicates a different pattern. We found that operations targeting the military or financial sectors actually generate less media coverage.
The third aspect we considered is the perceived sophistication of cyber operations. The media often gravitate toward stories that are easily understandable and remarkable. In this context, we expected cyber operations employing zero-day exploits, an easily observable indicator of sophistication, to receive more coverage. Our research supports this expectation, showing a significant increase in media stories for cyber operations that use these advanced techniques.
Lastly, we investigated the origin of the threat. Previous studies in communications have highlighted a media tendency toward bias against those outside the audience’s primary demographic, often leading to an exaggerated portrayal of non-white individuals in terrorism-related news.
Extending this insight to the realm of cyber threats, we anticipated a similar pattern, with adversarial threat groups being overrepresented in media narratives. This presumption aligns with past research, which observed that operations attributed to Russia, China, Iran, and North Korea tend to receive more attention.
However, our research does not find a significant correlation between media coverage and cyber operations attributed to key adversaries of Western powers, such as Russia, China, Iran, and North Korea.
Double bias
Our findings reveal a ‘double bias’ in media reporting on cyber operations. This bias originates from the reporting practices of commercial threat intelligence firms, further skewed by media outlets’ preference for stories that resonate with their audiences. This layered selectivity results in a narrow and potentially distorted portrayal of cyber threats, influencing academic discourse and policy-making.
There is a fascinating trend to watch regarding the double bias. Traditionally, mostly Western cyber threat intelligence firms have publicly disclosed details on APTs. Kaspersky Lab, based in Russia, stands out as an exception; the company has also published on various Western covert cyber operations that haven’t been widely reported elsewhere. Lately, however, Chinese cybersecurity companies have begun to publicly attribute cyber threat actors as well. If this trend continues, it will be intriguing to observe how the media react to these reports and how credible they are deemed compared with reports from Western threat intelligence firms.
Unpacking the Myths of Employee Ownership
This article was originally published in Inc Magazine with Bill Fotsch.
Last year, Pete Stavros, a senior partner at KKR and the founder of Ownership Works, published an article in Fortune championing shared company ownership as the "missing path to the American dream." And for good reason--an increasing share of Americans believe the American dream has deteriorated, with only 19 percent reporting confidence that their children's generation will be better off than their own, according to a recent NBC poll. Stavros's proposal received support not just from labor advocates but also from the investment community, including major financial institutions.
We, too, find common ground with Stavros, particularly in improving business results and the lives of the employees that drive those results. But we diverge when it comes to the implementation of employee ownership. The difference, as they say, is in the details--arguably, ones that make or break the success of employee ownership.
While we recognize the many benefits of employee ownership, we emphasize that it does not automatically produce the desired outcomes on its own. To rise to the level of the American dream, ownership must be earned, not simply given. In other words, ownership must be realized through gains in productivity and value-added; it cannot sustainably be given out in perpetuity.
That raises a chicken-or-the-egg question. Let's go back to Corey Rosen's 1987 Harvard Business Review research, which revealed that ESOP (employee stock ownership plan) companies with participation plans grew three to four times faster than those without. The key word here is "participation." Rosen, an otherwise staunch supporter of employee ownership, did not shy away from revealing this detail. For the ESOP to thrive, employees must be involved in the plan and earn the reward.
That was over three decades ago; has the narrative changed?
Take, for instance, the Harvard Business School Working Knowledge article discussing how KKR's ownership model dramatically changed worker behavior and company success. It's a compelling narrative, but it may tempt readers toward an overly utopian view of ESOPs. An employee will not necessarily behave like an owner simply because they are given equity, any more than a pre-med student will behave like a doctor if given an unearned degree. Rather, ownership is the fruit of stewardship and investment.
Recent conversations with Gil Hantzsch, President of MSA, reminded us that giving employees ownership changes a company's form more readily than its function. MSA is a thriving ESOP company, yet when they attempted to pool resources and share best practices across their various branches, the ownership model did not automatically encourage collaboration, trust, or shared action. It wasn't until MSA introduced structured interactions--a series of 'flocking events', where subject-matter experts met in person to get to know one another, build trust and share insights--that their best-practices initiative gained traction. Here, the company equity had been in place for years but was not sufficient on its own to change behavior.
If it's clear that a company would do better if its employees began to think and act like owners, and ownership at face value does not transform employees, then what does?
The genesis of a successful ESOP doesn't begin with the ESOP itself. It starts with cultivating a culture of ownership among employees, treating them as true partners in the mission to deliver value to customers and ensure sustainable profitability. Many ESOP successes lead back to this fundamental approach, at companies such as MSA Engineering, Trinity Products, Springfield Remanufacturing, and Dorian Drake. And there is no shortage of successful companies that never had employee ownership in their arsenal: Southwest Airlines had profit sharing long before it had any employee equity program.
When employees can see and understand the economics of the business, they learn how their day-to-day behaviors influence the bottom line. Then they can innovate and contribute. When employees are actively in conversation with the customer, they have insight into what drives the business's value. As confidence builds, employees develop an eye toward long-term strategy. It's algebra, then calculus. This scaffolding ensures employees are prepared for the responsibility of ownership and can make the most of it.
Our five years of research on this management approach, called Economic Engagement, spans eight waves of 50 to 150 companies each and is published in Inc. ("A Key Strategy to Double Your Profitable Growth"). It includes fifteen questions aimed at understanding the drivers behind employee behavior and company success, one of which is employee ownership. While employee ownership is part of the equation, the existing body of research does not single it out as the ultimate driver of performance or employee well-being. It's the combination that produces superior results:
Customer engagement is the starting point, since customers define value and thus the economics of any business. Ensure that all employees have an ongoing window on what customers value, since customers change over time.
Economic understanding aligns all employees around a common definition of success for the company, one that evolves from customer engagement and the value being added.
Economic transparency enables all employees to see how the company is doing and learn from successes and failures.
Economic compensation gives all employees a shared stake in the results, making them economic partners in the company. This ranges from wages and incentive compensation to long-term equity.
Employee participation leads to lower turnover and better relationships between owners/managers and employees, by encouraging employees to actively participate in the business, often beyond their defined role.
At economically engaged companies, employees are immersed in the operational economics that power profitability--metrics like product shipments, monthly job margin dollars, and the acquisition of new customers. Employees learn to track and forecast these key numbers on a weekly basis. They're empowered to steer these numbers in a positive direction, while also reaping the rewards of enhanced performance. They're likely to forge long-lasting and fruitful careers, as well as source quality referrals. The environment elevates the participation of the employee to a level that transcends transactional ownership.
Employee ownership is good, but by itself it has limited impact on employee behavior. It's hard for employees to feel motivated by a potential benefit of an indeterminate amount at some point in the distant future; it's hard enough to get employees to participate in 401(k) matching programs. Shared ownership is not a panacea; it's a tool. So, let's agree that we should improve business results and the lives of the employees who drive those results--by learning from each other, from employees, and from research.
We studied 235 stocks–and found that ESG metrics don’t just make a portfolio less profitable, but also less likely to achieve its stated ESG aims
This article was originally published in Fortune.
Institutions have become increasingly skeptical about ESG ratings–and rightly so. In our recent research, we show how the inclusion of ESG metrics in assembling a portfolio can lead to unintended consequences.
After assembling the subset of stocks traded daily on the three major exchanges between 1998 and 2020, together with their ESG data, we quantitatively studied the inclusion of ESG metrics in two ways. First, we considered trading strategies that rely only on returns, rather than a combination of returns and ESG scores. We found that these return-based rules actually produced portfolios with higher ESG scores than ESG-based rules did.
Second, we considered trading strategies that prioritize the stocks with the highest overall ESG scores, reflecting the increased attention that ESG has received in recent years. We found that this does not result in the most efficient portfolio in terms of risk-adjusted returns: while including ESG data led to portfolios with higher returns, it came at the cost of more volatility.
Our results may come as a surprise: Because of the noise inherent in ESG metrics, including them creates estimation risk and worsens the portfolio allocation. In fact, we find that the explicit targeting of ESG metrics leads to a portfolio allocation that is economically and environmentally worse than the market allocation. That is consistent with prior research that finds substantial disagreement among ESG ratings agencies due to their chosen ESG metrics, how they measure the metrics, and how they weight across the metrics in forming overall scores. Our results are also consistent with recent research that has shown how the inclusion of uncertainty associated with an ESG metric lowers financial returns.
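To make the two rules concrete, here is a stylized Python sketch of the comparison; the returns, ESG scores, and weighting schemes are simulated placeholders, not the data or methodology of our paper. The point is only to show where a noisy ESG score enters the allocation and how risk-adjusted performance is then measured.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_stocks, n_days = 235, 252
returns = pd.DataFrame(rng.normal(0.0004, 0.02, (n_days, n_stocks)))
esg_scores = pd.Series(rng.uniform(0, 100, n_stocks))  # noisy ratings

# Rule 1: returns-only, weighting by (non-negative) trailing mean return.
w_ret = returns.mean().clip(lower=0)
w_ret /= w_ret.sum()

# Rule 2: ESG-based, weighting by the overall ESG score.
w_esg = esg_scores / esg_scores.sum()

def sharpe(weights):
    """Annualized Sharpe ratio of the weighted portfolio."""
    port = returns @ weights
    return np.sqrt(252) * port.mean() / port.std()

print(f"Returns-only rule Sharpe: {sharpe(w_ret):.2f}")
print(f"ESG-weighted rule Sharpe: {sharpe(w_esg):.2f}")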
It’s as if you are trying to hit a moving target: you will not only miss the target but also create a mess in the process. Even though the desire to achieve broader impact through ESG is laudable, the devil is in the details. The measurement and choice of metrics matter enormously, and the absence of clarity and consensus around them introduces significant noise into investors’ portfolio choices.
To make further sense of these results and understand how the average American thinks about ESG matters, we surveyed a nationally representative sample of 1,500 people and asked them to rank 10 ESG topics. While we can only speak to the relative ranking of each topic, we find no statistical evidence that individuals believe companies should prioritize ESG issues over maximizing shareholder value, even after accounting for their own ranking of those issues.
Furthermore, those who personally rank issues such as climate change among the greatest priorities also recognize that addressing them is not necessarily within a company’s objectives. If anything, respondents tend to rank paying a living wage higher as a company objective than they do in their own personal rankings. In this sense, whereas a frequent justification for active ESG policies is that people believe companies should be doing more, our results suggest that such policies simply reflect people’s own preferences superimposed onto the company.
We also conducted a simple randomized experiment to gauge the impact of information on attitudes toward ESG: we provided some respondents, but not a control group, with information from a scientific study about the costs of renewable energy, and then asked about their support for renewable-energy policies. We found that the information lowered support, as respondents learned about costs that are often overlooked. This divergence between personal and organizational ESG objectives, combined with the muddled ESG scoring landscape, reiterates the potential pitfalls of relying heavily on these scores for investment decisions.
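Mechanically, the experiment reduces to comparing mean support between the treated and control groups. The sketch below does this in Python with simulated data; the support rates and sample split are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.binomial(1, 0.62, 750)  # support indicator, no information
treated = rng.binomial(1, 0.55, 750)  # support after cost information

t_stat, p_value = stats.ttest_ind(treated, control)
diff = treated.mean() - control.mean()
print(f"difference in support: {diff:+.3f} (p = {p_value:.3f})")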
An essential takeaway is the need for a balanced approach. While ESG metrics can provide valuable insights into a company’s broader societal impact, they should be seen as a supplement, not a replacement, to traditional financial metrics. Investors should be wary of overemphasizing ESG at the expense of established measures that have stood the test of time.
Christos A. Makridis, Ph.D., is the founder and CEO of Dainamic Banking and holds academic affiliations at Stanford University, among other institutions.
Majeed Simaan, Ph.D., is a professor of finance and financial engineering at the School of Business at Stevens Institute of Technology.
Researchers reveal the hidden peril of ‘labeling’ employees
This article was originally published on Fast Company.
In today’s hyper-competitive business landscape, the quest to quantify and categorize employee performance is more aggressive than ever. Consulting giants like McKinsey offer provocative frameworks that promise to neatly sort your workforce into boxes, “amplifying the impact of star performers” by identifying six distinct employee groups, or archetypes.
Such categorizations echo the controversial strategies of yesteryear, notably Jack Welch’s “Rank and Yank” policy. Remember how that worked out? Welch’s system had its moment in the sun, but it eventually fell from grace, proving to be a divisive and morale-crushing strategy.
Before you sign that consulting agreement and begin using their employee filtration tools, it’s worth pausing to consider the powerful psychological implications of labeling. We need to talk about the Pygmalion Effect—a concept that suggests these labels could be doing more harm than good.
The Pygmalion Effect refers to the concept that the labels we attach to people can influence their behavior in ways that confirm these labels. Imagine you label someone as a “disruptor.” Over time, not only will they start acting the part, but their managers and colleagues will treat them as such, reinforcing the behavior. In other words, the label becomes a self-fulfilling prophecy, locking individuals into roles that may not reflect their potential or future performance.
For instance, a sales manager who labels a team member as “low potential” might unconsciously offer fewer growth opportunities, affecting the employee’s performance and motivation to step up. Or consider how many talented employees might be pigeonholed into roles that don’t fully exploit their skills, simply because of a label slapped onto them during a performance review.
Here’s the kicker: Employees aren’t static entities. Their performance and engagement levels can change, often dramatically, in response to various factors like work environment and personal circumstances. Management practices alone are shown to affect productivity by around 20%. We’ve seen firsthand how an employee branded as a “value destroyer” turned into a key asset when engaged and motivated properly. To think that an employee’s worth can be permanently categorized is to misunderstand the dynamic nature of human capital.
We have seen how eschewing labels propels results for hundreds of consulting clients, including:
A U.K. manufacturer’s owner had labeled the head of their model shop a troublemaker, or “value destroyer” in McKinsey terms. Ignoring this, the owner solicited his input on how to improve the business. The model shop head generated profitable ideas, leading to increased earnings. He emerged as a leader, or “thriving star” in McKinsey terms.
A Kentucky landscape company viewed its employees as hired hands, or “mildly disengaged” in McKinsey terms. Treating them like trusted partners, with a shared focus and an incentive to increase job margin per month, drastically improved productivity and profits, as well as innovation. One truck driver, running a snowplow, generated a new client on his own by plowing an unplowed parish parking lot, asking only that the pastor take a call from his company sales team. No one told him to do this. With focus and incentive, he transformed from disengaged to “reliable and committed.”
An urgent care business had come to assume they were stuck with debilitating turnover; “quitters,” McKinsey might suggest. But after examining exit interviews and addressing the common issues (particularly lack of management listening and acting on provider input), the “quitters” stopped quitting. Patient NPS scores soared, as did profits.
The real cost you pay when working with imprudent consultants isn’t their expensive fees; it’s the potential stifling of employee growth and innovation. When you label people, you’re not just putting them in boxes; you’re putting a ceiling on what they can achieve. And in today’s fast-paced business world, that’s a luxury no company can afford. It turns out employees can and do change over time, something you can either enhance or stifle.
To be sure, there are some employees who are simply poor performers and not right for the job even when you work with them to explore a change in responsibilities.
Of course, one solution is to screen employees better. Some of our prior research, for example, has found that employees who demonstrate greater intellectual tenacity tend to perform much better than their counterparts, and their advantage in the labor market has grown over time as work has become more complex. One way to think about this result is that persistence and curiosity in the workplace are quintessential characteristics for not only problem solving but also interpersonal dynamics. But hindsight is always 20/20, and the wrong candidates might still pass through the screening.
Instead of spending resources on categorizing employees, why not invest in creating an environment that promotes positive behavior change? By focusing on behaviors rather than labels, companies become more growth-oriented and attentive to what can change.
This fosters a culture where employees are empowered to evolve and adapt, driving not just individual success but also organizational excellence. Our multiple waves of survey research—on what we broadly refer to as Economic Engagement—shows that when companies partner with employees to serve their customers profitably, behavior changes, and that in turn leads to greater profitability.
Economic Engagement isn’t your run-of-the-mill, feel-good company culture. Instead, it’s a well-structured, results-driven management system underpinned by transparency, a deep understanding of economics, and active employee involvement.
Employees aren’t just taught how to read a balance sheet. They’re immersed in the operational economics that power profitability, metrics like product shipments, monthly job margin dollars, and the acquisition of new customers. Employees learn to track and forecast these key numbers on a weekly basis. They’re empowered to steer these numbers in a positive direction, while also reaping the rewards of enhanced performance.
Employees at an economically engaged company are likely to forge a long-lasting and fruitful career there, as well as to provide quality referrals. The environment elevates the participation of the employee to a level that transcends categorization and engenders true engagement.
Before you take any steps to classify your workforce, consider other avenues for understanding and unlocking their potential. Sometimes, the smartest decision is to sidestep the labels and focus on cultivating a culture that brings out the best in everyone.
Christos A. Makridis is the CEO and founder of Dainamic, a financial technology startup, and a research affiliate at Stanford University. He holds doctorates in economics and management science and engineering from Stanford University. Follow @hellodainamic.
Bill Fotsch is a business coach, investor, researcher, and writer. He holds an engineering degree and an MBA from Harvard Business School and is the founder of Economic Engagement.
The FDIC’s 2023 Risk Review shows the surprising resilience of community banks despite inflation and shifting interest rates
This article was originally published in Fortune.
The Federal Deposit Insurance Corporation (FDIC) recently released its Risk Review for 2023, detailing a substantial increase in unrealized losses–$617.8 billion in the last quarter of 2022 and $515.5 billion in the first quarter of 2023–driven in large part by “declines in medium- and long-term market interest rates.” If banks face a situation where they need liquidity and therefore have to sell investments at a loss, the depreciation of their portfolios could be a deathblow that renders many financial institutions insolvent.
While several asset quality indicators, such as the delinquency rate and noncurrent loan rate, remained favorable, the reality is that the banking sector is not in good shape–and deteriorating macroeconomic conditions have exacerbated risk factors. Rising interest rates, coupled with inflation, have simultaneously affected bank balance sheets and consumer debt and expenditures. However, there is an important silver lining in the report: Community banks have fared much better than their larger counterparts–and helped sustain small business lending.
One important metric for gauging the health of a bank is its net interest margin (NIM), which reflects the interest income generated from credit products, such as loans and mortgages, net of outgoing interest payments to holders of savings accounts and certificates of deposit. Although NIMs increased across the industry in 2022, the increase was concentrated among community banks, whose NIM reached 3.45%–up from roughly 3.25% in 2021. Since 2012, community banks have had roughly 0.5 percentage points higher NIMs than their non-community counterparts.
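As a back-of-the-envelope illustration of the metric, with made-up figures rather than numbers from the FDIC report, NIM is simply net interest income divided by average earning assets:

# Illustrative figures only; not drawn from the FDIC Risk Review.
interest_income = 45.0      # $ millions earned on loans and securities
interest_expense = 12.0     # $ millions paid on deposits and CDs
avg_earning_assets = 950.0  # $ millions

nim = (interest_income - interest_expense) / avg_earning_assets
print(f"NIM: {nim:.2%}")  # prints "NIM: 3.47%"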
In addition, community banks have played a major role in supporting small business lending. Even though they hold only 14.7% of total industry loans, they accounted for 23.6% of total small business loans in 2022. Moreover, the increase in lending did not come with the additional costs of higher risk: the commercial and industrial early-stage past-due rate and noncurrent rate for community banks actually declined at the onset of the pandemic and have remained low at around 0.5%, compared with roughly double that among non-community banks.
To better understand the health of the banking sector at a higher frequency, I launched a monthly nationally representative banking survey of 1,500 respondents in June. Consistent with these results from the FDIC Risk Review report, I found that individuals who borrow from smaller banks are much more confident in the safety of their deposits. For example, 34.5% of respondents who work with a small bank report that their bank is “rock solid” with “no concerns,” whereas only 26% of those with a medium-sized bank report similarly (and 33% among large banks). There is a growing recognition that small banks are better positioned to maintain the trust and loyalty of their borrowers because their interactions with customers are more frequent and their investments more prudent, particularly in their local communities.
One potential concern with these results is that differences in perception of risk simply reflect differences in the type of borrower. However, all results are robust to controlling for a wide array of demographic factors (age, race, education, marital status, employment status) and the respondents’ overall perception of the banking sector (not their own bank). Furthermore, those who work with a small bank are less optimistic about interest rates and central bank policy overall, so if anything, these results are overly conservative.
As 2023 comes to a close, it is important to remember the important role that small banks play in providing liquidity in the banking system–often with the least amount of risk. Due to their exposure to varying macroeconomic conditions, larger and mid-sized banks will need to pay greater attention to the quality of their assets and the health of their balance sheets.
Christos A. Makridis is the founder and CEO of Dainamic, a financial technology startup that empowers banks with regulatory compliance and forecasting software, in addition to serving as a research affiliate at several leading universities. Christos holds dual doctorates in economics and management science & engineering from Stanford.
A Missing Link for Improving Education
This article was originally published in City Journal.
Republican presidential candidate Vivek Ramaswamy has said that “the nuclear family is the best form of governance known to mankind.” That notion has its critics, but it is increasingly shared by many across the political spectrum. Two recent books, for instance—Robert Cherry’s The State of the Black Family: Sixty Years of Tragedies and Failures and New Initiatives Offering Hope and Melissa Kearney’s The Two-Parent Privilege: How Americans Stopped Getting Married and Started Falling Behind—present evidence that family dynamics influence a child’s life chances more than any other factor, including formal education. Unfortunately, state-level educational assessments and the National Assessment of Educational Progress (NAEP) include no student information on family structure (for example, whether a student lives in a two-parent, single-parent, guardian, or foster household), making it harder to pursue data-driven educational interventions.
In our book, The Economics of Equity, we discuss state-level policy interventions to involve parents more effectively in their children’s education, to implement initiatives such as after-school programming targeted toward students most in need, and to help capable students of low socioeconomic status. These recommendations are especially important given that students of low socioeconomic status spend substantially less time on educational activities outside of school. We cannot keep throwing more money at this problem; we have to address the root issue, which starts with the family. If schools could access data on family dynamics, they could craft more realistic parent-teacher-student-school responsibility agreements and create tiered intervention systems that take family capability and needs into account.
The good news is that this has been done before. For instance, one of us has written about how a school serving students from low-income families achieved Blue Ribbon status through the leadership of its principal. The principal’s key intervention was an afterschool program focusing on students with unstructured home environments. The principal was only able to make these determinations, however, thanks to teacher recommendations, not an official database that tracked these kids’ home status.
Interventions based on students’ gender, race, class, learning disabilities, or English proficiency alone have led to many ineffective initiatives. Each of these characteristics is correlated with achievement gaps but is not their driving factor. In some cases, as with California’s new math requirements, officials are promoting initiatives that not only cost taxpayers dearly but also risk worsening achievement gaps.
Our book also summarizes the empirical evidence on charter and private schools, which have historically been better suited to parental involvement. Since parents must choose these schools, the schools can require a certain level of accountability from them. They can also create and adapt systems that meet parents’ needs without having to pass through the layers of bureaucracy and union battles common in public schools.
Unfortunately, the student data currently available to public school educators don’t help them address problems stemming from family status. Providing schools with data on family structure would give them a vital tool for addressing academic achievement gaps and improving educational outcomes.
Goldy Brown III is an associate professor at Whitworth University’s school of education, director of the university’s Educational Administration Program, and a former school principal. Christos A. Makridis is a research affiliate at Stanford University, CEO/founder at Dainamic, and holds appointments in other institutions.
The potential of AI systems to improve teacher effectiveness in Spokane public schools
This article was originally published in the Spokesman-Review (with Goldy Brown).
The United States K-12 education system has faced challenges for years, but the headwinds have grown recently with the pervasiveness of school closures and the resulting effects on student mental health and learning outcomes. Student test scores in math and reading fell to their lowest levels in 2022, according to the National Assessment of Educational Progress. Reversing these deteriorating outcomes requires effective instruction from teachers across classrooms.
This year, Spokane Public Schools announced that it is pioneering a novel approach to evaluating and improving teacher effectiveness using AI systems. While AI is sometimes thought of as displacing jobs, it can also augment our productivity and learning. And as this school district in Spokane is exploring, AI systems can potentially help lower-performing teachers improve their quality of instruction at scale and embed greater consistency into teaching nationwide.
School districts often struggle with limited resources to provide continuous, quality training for their teachers, and bureaucratic impediments make removing ineffective teachers an arduous process. As a result, many students suffer under the instruction of teachers who, despite their best intentions, are ill-equipped to meet their educational needs. A large body of empirical research, led in part by professor Eric Hanushek at the Hoover Institution, has pointed out that teacher quality is the single most important school-related determinant of learning outcomes.
The recent advances in large language models, such as Bard and ChatGPT, highlight the ways that AI can improve training and assessment of teachers at scale without having to involve principals and other training professionals for each individualized case. In particular, AI-powered platforms can provide a personalized, data-driven approach to teacher training.
By analyzing classroom data and building statistical models that predict learning outcomes as a function of teacher characteristics and inputs, these systems can offer real-time feedback and guidance, addressing teachers’ specific areas of weakness and offering them ways to improve. For example, if a teacher consistently struggles with engaging students or explaining complex topics, the AI could provide tailored strategies and methods to improve in these areas.
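As a minimal sketch of what such a feedback loop could look like under the hood, consider the following Python snippet. The data file, feature names, and bottom-quartile rule are hypothetical illustrations, not a description of any vendor’s actual system.

import pandas as pd
from sklearn.linear_model import LinearRegression

# "classroom_observations.csv" and all column names are hypothetical.
classes = pd.read_csv("classroom_observations.csv")
features = ["engagement_score", "clarity_score", "feedback_frequency"]

# Model student learning gains as a function of observable teacher inputs.
model = LinearRegression().fit(classes[features], classes["learning_gain"])

def coaching_flags(teacher_row, cutoff=0.25):
    """Flag the inputs where this teacher sits in the bottom quartile."""
    floor = classes[features].quantile(cutoff)
    return [f for f in features if teacher_row[f] < floor[f]]

teacher = classes.iloc[0]
predicted = model.predict(teacher[features].to_frame().T)[0]
print(f"Predicted learning gain: {predicted:.2f}")
print("Suggested focus areas:", coaching_flags(teacher))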
Moreover, AI-based coaching systems offer scalability and efficiency that traditional teacher training programs cannot match. Such a system can serve numerous teachers simultaneously, providing continuous support and learning opportunities. This continuous feedback loop would allow teachers to refine their skills constantly and adapt their teaching styles to their students’ evolving needs. Furthermore, AI systems would avoid putting further strain on the educational system that has already been stretched thin post-COVID.
While the potential benefits of AI in teacher coaching are vast, successfully implementing such a system requires careful consideration. An essential aspect of managing these AI systems is ensuring they are ethically used and respect teachers’ and students’ privacy. Confidentiality of data is paramount, and AI systems must be designed and regulated to ensure they comply with laws and ethical guidelines pertaining to data protection.
For example, our recent book, “The Economics of Equity in K-12 Education: A Post-Pandemic Policy Handbook for Closing the Opportunity Gap and Using Education to Improve the American Economy,” prominently features recommendations by professor Ryan Baker of the University of Pennsylvania, emphasizing that the use of AI in education will require data sharing between schools and vendors to be secured with the latest advances in cryptography, such as zero-knowledge proofs, to protect sensitive information.
Additionally, AI systems need regular fine-tuning to remain effective and relevant. This process would involve an ongoing cycle of feedback from teachers, AI developers and education experts to ensure the AI evolves in line with the changing dynamics of classrooms and the educational landscape. For instance, updates in curriculum, pedagogical strategies and teaching methods should be reflected in the AI’s feedback and coaching suggestions.
Ultimately, AI is a tool, rather than a replacement for human connection and judgment: Decisions must remain with the educators and administrators. AI can provide data-driven insights and recommendations, but it’s the teachers and administrators who will interpret this information in the context of their unique classroom environments and make the final decisions.
The pioneering work by Spokane Public Schools represents a novel attempt to solve the longstanding challenge of the deterioration in student learning outcomes driven, at least in large part, by the decline in teacher quality and absence of incentives. With careful management and continuous refinement, AI systems could revolutionize teacher coaching, significantly improving the quality of K-12 education across the nation.
While challenges remain, the path forward shows immense promise, offering hope to educators and students alike.
Goldy Brown III is an associate professor in the Graduate School of Education at Whitworth University in Spokane. He is also the director of Whitworth University’s Educational Administration Program. He was a school principal for almost a decade; schools that he administered earned four state recognition awards for closing the achievement gap between low-income and affluent students.
Christos A. Makridis is a research affiliate at Stanford University’s Digital Economy Lab and COO/co-founder of Living Opera, a multimedia startup focused on empowering and educating performing artists. He holds doctorates and masters degrees in economics and management science and engineering from Stanford University.
Should we ban ransomware payments? It’s an attractive but dangerous idea
This article was originally published on Cointelegraph Magazine.
A successful cyberattack on critical infrastructure — such as electricity grids, transportation networks or healthcare systems — could cause severe disruption and put lives at risk.
Our understanding of the threat is far from complete since organizations have historically not been required to report data breaches, but attacks are on the rise according to the Privacy Rights Clearinghouse. A recent rule from the United States Securities and Exchange Commission should help clarify matters further by now requiring that organizations “disclose material cybersecurity incidents they experience.”
As the digital world continues to expand and integrate into every facet of society, the looming specter of cyber threats becomes increasingly critical. Today, these threats take the form of sophisticated ransomware attacks and debilitating data breaches, particularly targeting essential infrastructure.
A major question for policymakers, however, is whether businesses faced with crippling ransomware attacks and potentially life-threatening consequences should have the option to pay out large amounts of cryptocurrency to make the problem go away. Some believe ransom payments should be banned for fear of encouraging ever more attacks.
Following a major ransomware attack in Australia, its government has been considering a ban on paying ransoms. The United States has also more recently been exploring a ban. But other leading cybersecurity experts argue that a ban does little to solve the root problem.
Ransomware and the ethical dilemma of whether to pay the ransom
At the most basic level, ransomware is simply a form of malware that encrypts the victim’s data and demands a ransom for its release. A recent study by Chainalysis shows that crypto cybercrime is down by 65% over the past year, with the exception of ransomware, which saw an increase.
“Ransomware is the one form of cryptocurrency-based crime on the rise so far in 2023. In fact, ransomware attackers are on pace for their second-biggest year ever, having extorted at least $449.1 million through June,” said Chainalysis.
Even though there has been a decline in the number of crypto transactions, malicious actors have been going after larger organizations more aggressively. Chainalysis continued:
“Big game hunting — that is, the targeting of large, deep-pocketed organizations by ransomware attackers — seems to have bounced back after a lull in 2022. At the same time, the number of successful small attacks has also grown.”
The crippling effect of ransomware is especially pronounced for businesses that heavily rely on data and system availability.
The dilemma of whether to pay the ransom is contentious. On one hand, paying the ransom might be seen as the quickest way to restore operations, especially when lives or livelihoods are at stake. On the other hand, succumbing to the demands of criminals creates a vicious cycle, encouraging and financing future attacks.
Organizations grappling with this decision must weigh several factors, including the potential loss if operations cannot be restored promptly, the likelihood of regaining access after payment, and the broader societal implications of incentivizing cybercrime. For some, the decision is purely pragmatic; for others, it’s deeply ethical.
Should paying ransoms be banned?
The increasing incidence of ransomware attacks has ignited a policy debate: Should the payment of ransoms be banned? Following a major ransomware attack on Australian consumer lender Latitude Financial, in which millions of customer records and IDs were stolen, some have begun to advocate for a ban on paying the ransom as a way of deterring attacks and depriving cybercriminals of their financial incentives.
In the United States, the White House has voiced its qualified support for a ban. “Fundamentally, money drives ransomware and for an individual entity it may be that they make a decision to pay, but for the larger problem of ransomware that is the wrong decision… We have to ask ourselves, would that be helpful more broadly if companies and others didn’t make ransom payments?” said Anne Neuberger, deputy national security advisor for cyber and emerging technologies in the White House.
Proponents argue that a ban would deter criminals and reorient priorities for C-suite executives. Critics, however, warn that a ban might leave victims in an untenable position, particularly when a data breach could lead to loss of life, as in the case of attacks on healthcare facilities.
“The prevailing advice from the FBI and other law enforcement agencies is to discourage organizations from paying ransoms to attackers,” Jacqueline Burns Koven, head of cyber threat intelligence for Chainalysis, tells Magazine.
“This stance is rooted in the understanding that paying ransoms perpetuates the problem, as it incentivizes attackers to continue their malicious activities, knowing that they can effectively hold organizations hostage for financial gain. However, some situations may be exceptionally dire, where organizations and perhaps even individuals face existential threats due to ransomware attacks. In such cases, the decision to pay the ransom may be an agonizing but necessary choice. Testimony from the FBI recognizes this nuance, allowing room for organizations to make their own decisions in these high-stakes scenarios, and voiced opposition to an all-out ban on payments.”
Another complicating factor is that an increasing number of ransomware attacks, according to Chainalysis, may not have financial demands but instead focus on blackmail and other espionage purposes.
“In such cases, there may be no feasible way to pay the attackers, as their demands may go beyond monetary compensation… In the event that an organization finds itself in a situation where paying the ransom is the only viable option, it is essential to emphasize the importance of reporting the incident to relevant authorities.”
“Transparency in reporting ransomware attacks is crucial for tracking and understanding the tactics, techniques and procedures employed by malicious actors. By sharing information about attacks and their aftermath, the broader cybersecurity community can collaborate to improve defenses and countermeasures against future threats,” Koven continues.
Could we enforce a ban on paying ransomware attackers?
Even if a ban were implemented, a key challenge is the difficulty in enforcing it. The clandestine nature of these transactions complicates tracing and regulation. Furthermore, international cooperation is necessary to curb these crimes, and achieving a global consensus on a ransom payment ban might be challenging.
While banning ransom payments could encourage some organizations to invest more in robust cybersecurity measures, disaster recovery plans and incident response teams to prevent, detect and mitigate the impact of cyberattacks, it still amounts to penalizing the victim and making the decision for them.
“Unfortunately, bans on extortions have traditionally not been an effective way to reduce crime — it simply criminalizes victims who need to pay or shifts criminals to new tactics,” says Davis Hake, co-founder of Resilience Insurance who says claims data over the past year shows that while ransomware is still a growing crisis, some clients are already taking steps toward becoming more cyber-resilient and able to withstand an attack.
“By preparing executive teams to deal with an attack, implementing controls that help companies restore from backups, and investing in technologies like EDR and MFA, we’ve found that clients are significantly less likely to pay extortion, with a significant number not needing to pay it at all. The insurance market can be a positive force for incentivizing these changes among enterprises and hit cybercriminals where it hurts: their wallets,” Hake continues.
The growing threat and risk of cyberattacks on critical infrastructure
The costs of ransomware attacks on infrastructure are often ultimately borne by taxpayers and municipalities that are stuck with cleaning up the mess.
To understand the economic effects of cyberattacks on municipalities, I released a research paper with several faculty colleagues, drawing on all publicly reported data breaches and municipal bond market data. We found that a 1% increase in county-level cyberattacks covered by the media leads to an increase in offering yields ranging from 3.7 to 5.9 basis points, depending on the level of attack exposure. Evaluating these estimates at the average annual issuance of $235 million per county implies $13 million in additional annual interest costs per county.
One reason for the significant adverse effects of data breaches on municipalities and critical infrastructure stems from all the interdependencies in these systems. Vulnerabilities related to Internet of Things (IoT) and industrial control systems (ICS) increased at an “even faster rate than overall vulnerabilities, with these two categories experiencing a 16% and 50% year-over-year increase, respectively, compared to a 0.4% growth rate in the number of vulnerabilities overall,” according to the X-Force Threat Intelligence Index 2022 by IBM.
A key factor contributing to this escalating threat is the rapid expansion of the attack surface due to IoT, remote work environments and increased reliance on cloud services. With more endpoints to exploit, threat actors have more opportunities to gain unauthorized access and wreak havoc.
“Local governments face a significant dilemma… On one hand, they are charged with safeguarding a great deal of digital records that contain their citizens’ private information. On the other hand, their cyber and IT experts must fight to get sufficient financial support needed to properly defend their networks,” says Brian de Vallance, former DHS assistant secretary.
“Public entities face a number of challenges in managing their cyber risk — the topmost is budget. IT spending accounted for less than 0.1% of overall municipal budgets, according to M.K. Hamilton & Associates. This traditional underinvestment in security has made it more and more challenging for these entities to obtain insurance from the traditional market.”
Cybersecurity reform should involve rigorous regulatory standards, incentives for improving cybersecurity measures and support for victims of cyberattacks. Public-private partnerships can facilitate sharing of threat intelligence, providing organizations with the information they need to defend against attacks. Furthermore, federal support, in the form of resources or subsidies, can also help smaller organizations – whether small businesses or municipalities – that are clearly resource-constrained, so they have funds to invest more in cybersecurity.
Toward solutions
So, is the solution a market for cybersecurity insurance? A competitive market to hedge against cyber risk will likely emerge as organizations are increasingly required to report material incidents. A cyber insurance market would still not solve the root of the problem: Organizations need help becoming resilient. Small and mid-sized businesses, according to my research with professors Annie Boustead and Scott Shackelford, are especially vulnerable.
“Investment in digital transformation is expected to reach $2T in 2023 according to IDC and all of this infrastructure presents an unimaginable target for cybercriminals. While insurance is excellent at transferring financial risk from cybercrime, it does nothing to actually ensure this investment remains available for the business,” says Hake, who says there is a “huge opportunity” for insurance companies to help clients improve “cyber hygiene, reduce incident costs, and support financial incentives for investing in security controls.”
Encouragingly, Hake has noticed a trend for more companies to “work with clients to provide insights on vulnerabilities and incentivize action on patching critical vulnerabilities.”
“One pure-technology mitigation that could help is SnapShield, a ‘ransomware activated fuse,’ which works through behavioral analysis,” says Doug Milburn, founder of 45Drives. “This is agentless software that runs on your server and listens to traffic from clients. If it detects any ransomware content, SnapShield pops the connection to your server, just like a fuse. Damage is stopped, and it is business as usual for the rest of your network, while your IT personnel clean out the infected workstation. It also keeps a detailed log of the malicious activity and has a restore function that instantly repairs any damage that may have occurred to your data,” he continues.
Ransomware attacks are also present within the crypto market, and there is a growing recognition that new tools are needed to build on-chain resilience. “While preventative measures are important, access controlled data backups are imperative. If a business is using a solution, like Jackal Protocol, to routinely back up its state and files, it could reboot without paying ransoms with minimal losses,” said Eric Waisanen, co-founder of Astrovault.
Ultimately, tackling the growing menace of cyber threats requires a holistic approach that combines policy measures, technological solutions and human vigilance. Whether or not a ban on ransom payments is implemented, the urgency of investing in robust cybersecurity frameworks cannot be overstated. As we navigate an increasingly digital future, our approach to cybersecurity will play a pivotal role in determining how secure that future will be.
Emory Roane, policy counsel at the Privacy Rights Clearinghouse, says that mandatory disclosure of cyber breaches and offering identity theft protection services are essential, but that approach “still leaves consumers left to pick up the pieces for, potentially, a business’ poor security practices.”
But the combination of mandatory disclosure and the threat of getting sued may be the most effective approach. He highlights the California Consumer Privacy Act.
“It provides a private right of action allowing consumers to sue businesses directly in the event that a business suffers a data breach that exposes a consumer’s personal information and that breach was caused by the business’ failure to use reasonable security measures,” Roane explains. That dovetails with a growing recognition that data is an important consumer asset that has long been overlooked and transferred to companies without remuneration.
Greater education around cybersecurity and data sovereignty will not only help consumers stay alert to ongoing threats — e.g., phishing emails — but also empower them to pursue and value more holistic solutions to information security and data sharing so that the incidence of ransomware attacks is lower and less severe when they do happen.
Bans rarely work, if for no other reason than enforcement is either physically impossible or prohibitively expensive. Giving into ransoms is not ideal, but neither is penalizing the entity that is going through a crisis. What organizations need are better tools and techniques – and that is something that the cybersecurity industry, in collaboration with policymakers, can help with through new technologies and the adoption of best practices.
Data as Currency
This article was originally published in Wall Street Journal (with Joel Thayer).
America’s antitrust policies are stuck in the 1980s. That was when courts and regulators began relying on what’s called the consumer-welfare standard. Articulated in Robert Bork’s 1978 book, “The Antitrust Paradox,” the standard replaced classical antitrust analysis, which focused primarily on promoting competition. Courts and regulators are supposed to take into account a variety of consumer benefits, including lower prices, increased innovation and better product quality.
But scholars, courts and regulators have ignored Bork’s multifaceted tests and obstinately focused on price alone. The result, 40 years later, is that a few tech giants have been able to dominate the market. The problem is that their offering of free services presents a new challenge for measuring anticompetitive harm and consumer welfare. If price alone is our measure, it’s hard to argue that free services are bad for consumers.
Legal analysts have difficulty applying nonprice factors to tech companies even when confronted with such demonstrations of monopoly power as viewpoint-based censorship and the imposition of rents on developers of apps and ad tech—or even such demonstrations of actual consumer harm as privacy violations and pass-through costs on digital goods.
These tech platforms have enabled instant communication, e-commerce, information search and political engagement. In exchange for these services, customers provide data. In a new working paper, we argue that data is the new currency that these tech behemoths are capitalizing on. Every click, every interaction and every transaction feeds the digital economy.
In this light, the concept of free services is misleading, because consumers do pay a price by giving away their data. Worse, they do so often without understanding the full implications. These facts demand recalibration of the consumer-welfare standard to protect consumers’ rights and promote competitive markets. Data is more than just a digital footprint. It is a resource that tech companies exploit to amass control and wealth. The power dynamics in this exchange remain unbalanced, with consumers often unaware of the value of their data.
Some courts and scholars have argued that these harms are speculative and difficult to quantify. But there is a metric by which we can more accurately measure whether consumer welfare is served by tech companies: the amount of data they collect in exchange for those free services. Our paper explains several methods for deriving the value of data, especially from financial markets and structural methods. In general, these methods look at the role data plays in the production of goods and services.
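As a stylized illustration of the structural approach, one can treat data as a factor of production and estimate its output elasticity with a log-linear, Cobb-Douglas-style regression. The snippet below uses simulated firms with an assumed data elasticity of 0.2; it is not an estimate from our paper.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
log_capital = rng.normal(5, 1, n)
log_labor = rng.normal(4, 1, n)
log_data = rng.normal(3, 1, n)  # e.g., log of user records collected
log_output = (0.3 * log_capital + 0.5 * log_labor + 0.2 * log_data
              + rng.normal(0, 0.1, n))

X = sm.add_constant(np.column_stack([log_capital, log_labor, log_data]))
fit = sm.OLS(log_output, X).fit()
print(fit.params)  # last coefficient recovers data's elasticity (~0.2)

Multiplying such an elasticity by a firm’s revenue offers one rough, assumption-laden way to put a dollar figure on the value attributable to data.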
Google, for example, required few data points from users when it made its search service available in 1997. Today it requires near-constant access to its users’ geolocation, spending habits and time spent on other sites. A judge could evaluate whether Google is arbitrarily requiring its users to provide more data—akin to raising the price of a product—solely to avail itself of ad revenues and more market share. To do so would be to engage in anticompetitive behavior. Antitrust law doesn’t allow this type of behavior in any other context.
Consumers take on more risk and gain little new benefit every time tech companies pilfer more data from them. Even as the companies obtain more data, the quality of their services remains virtually unchanged. These companies collect this data with few safeguards. And thanks to buying out or merging with other companies, they lack any meaningful competitors.
Big Tech has, in effect, made data a new currency, which functions as the basis of many Big Tech companies’ business models. In the face of today’s data-driven digital markets, the fact that data is currency should compel us to revisit how we think about antitrust harm and what constitutes a competitive tech market.
Men over 45 are working fewer hours, new research shows
This article was originally published in Fast Company.
There is no shortage of anecdotes when it comes to people sharing strong opinions about remote work and its effects on productivity and the tendency to slack off. These narratives are important, but they may not tell the whole story. Fortunately, newly available data from the American Time Use Survey (ATUS) by the Bureau of Labor Statistics provides some insight.
MEASURING TIME SPENT IN DIFFERENT ACTIVITIES
The ATUS is the only federal survey providing data on the full range of nonmarket activities, including the amount of time people spend on paid work, childcare, volunteering, and socializing. Individuals in the ATUS are drawn from the sample of respondents in the Current Population Survey as they exit that survey.
One of the major benefits of the ATUS is that it measures a wide array of activities, not just time at work, like many existing surveys. This allowed me in my research to differentiate between work, leisure, household chores, childcare, and more.
Another major benefit of the ATUS is that it collects detailed 24-hour time diaries in which respondents report all the activities from the previous day in time intervals. These records are not only more detailed but also more reliable than standard measures of time allocated to work available in other federal datasets that require respondents to recall how much they worked over the previous year or week. These diaries contain much less noise than typical survey results.
UNCOVERING CHANGES IN TIME USE AMONG REMOTE WORKERS
Drawing on ATUS data from 2019 to 2022 among employed workers between the ages of 25 and 65, my new research paper documents new trends in time use, distinguishing between workers in more- versus less-remote jobs.
To measure remote work, I use an index by professors Jonathan Dingel and Brent Neiman at the University of Chicago, reflecting the degree to which tasks in a given occupation can be done remotely versus in person.
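For readers curious about the mechanics, here is a minimal sketch, in Python, of how an occupation-level remotability index can be joined to time-diary records. The file names and columns are hypothetical placeholders, not the actual ATUS or Dingel-Neiman file layouts.

import pandas as pd

# Hypothetical extracts: one diary day per row, and one remotability
# score per occupation code (in the spirit of the Dingel-Neiman index).
atus = pd.read_csv("atus_diary_days.csv")        # year, occ_code, work_minutes
remote = pd.read_csv("remotability_index.csv")   # occ_code, teleworkable

merged = atus.merge(remote, on="occ_code", how="inner")
merged["more_remote"] = merged["teleworkable"] >= merged["teleworkable"].median()

# Average daily minutes of work by remotability group and year.
print(merged.groupby(["year", "more_remote"])["work_minutes"].mean().unstack())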
WORK TIME SHRANK BY NEARLY AN HOUR
The first main result is that time allocated to work activities declined by nearly an hour among remote workers in 2022, relative to their 2019 trend before the pandemic, and time allocated toward leisure grew by about 40 minutes. The remainder of the time appears to have gone toward activities that are not otherwise classified, which might reflect scrolling on social media.
Your first instinct might be that time at work declined simply because people are spending less time commuting. While that is partly true, it doesn't explain the sustained decline in time at work and increase in leisure from 2020 to 2022.
Furthermore, I ran separate models to differentiate between “pure work” and “work-related activities”—the latter including travel time to work. All of the changes in time at work come from “pure work,” rather than other categories related to travel or other income-generating activities.
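As a rough illustration of that distinction, the sketch below tallies diary minutes into pure-work versus work-related buckets using ATUS-style activity-code prefixes. The variable names and example codes are illustrative; any real analysis should take the prefixes from the published ATUS lexicon rather than from this sketch.

```python
import pandas as pd

# One row per respondent-activity episode, as in an ATUS-style diary file.
# Column names and codes are illustrative, not official ATUS variables.
episodes = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2],
    "activity_code": ["050101", "180501", "120303", "050101", "050201"],
    "minutes": [420, 35, 60, 480, 30],
})

def classify(code: str) -> str:
    # Assumed lexicon prefixes: 0501 = working at main/other jobs
    # ("pure work"); 0502-0503 = other work-related and income-generating
    # activities; 1805 = travel related to work.
    if code.startswith("0501"):
        return "pure work"
    if code.startswith(("0502", "0503", "1805")):
        return "work-related"
    return "other"

episodes["category"] = episodes["activity_code"].map(classify)
print(episodes.groupby(["person_id", "category"])["minutes"].sum())
```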
But what’s even more striking is that the decline in work and rise in leisure is concentrated among males, singles, and those without children. In fact, single males over the age of 45 in remote jobs experienced a nearly two-hour decline in time allocated to work in 2022, relative to 2019, and over an hour increase in time allocated to leisure. This demographic divergence demonstrates the heterogeneity in responses to remote work.
Compare these patterns with those among women and caregivers. I found that college-educated women allocated an additional 50 minutes per day to work in 2022, relative to 2019. Among non-college-educated women, there were no statistically significant changes. I also found a nearly 30-minute-per-day increase in work among women with children. Among college-educated women, at least some of that increase in work comes from a decline in home production activities, such as taking care of children and doing chores around the house.
IMPLICATIONS FOR PRODUCTIVITY AND THE LABOR MARKET
Do these results on remote work—especially for single males—simply reflect the phenomenon of quiet quitting, where employees disengage from work while remaining employed?
While more research is needed, the short answer appears to be no. In fact, I found that remote workers reported higher satisfaction with their lives and felt better rested. Remote workers also did not report more time allocated toward interviewing for other jobs. Cumulatively, these facts imply that changes in time use—at least since 2019—are not driven by disengagement.
These results have important implications for the debate about productivity. My other research has found that hybrid work arrangements may offer the best of both worlds.
For example, my research with Jason Schloetzer at Georgetown University using data from Payscale shows that the positive relationship between remote work and job satisfaction is statistically significant for hybrid workers only after accounting for differences in corporate culture. And even then, corporate culture dwarfs the economic significance of remote work.
Similarly, my work with Raj Choudhury, Tarun Khanna, and Kyle Schirmann at Harvard Business School using data from a randomized experiment in Bangladesh shows that workers on a hybrid schedule—working some days at home and some in the office—are more creative, send more emails, and feel like they have a better work-life balance relative to their fully remote or fully in-person peers.
It’s clear that remote work is not a one-size-fits-all phenomenon. While there are many benefits of remote work that come in the form of breaking down barriers and heightened flexibility, there are also new challenges that must be managed.
Crucially, we must take responsibility for putting into practice the right habits and processes to manage our time so that it does not drift away. Business leaders should help inculcate a culture of excellence by focusing on outcomes, not simply measures of hours worked, and lead by example.
The transformative role of water markets for a climate-changed future
This article was originally published in the Global Water Forum.
Water markets provide a mechanism for the efficient allocation of water resources based on market principles. In a water market, water rights can be bought and sold, allowing water to flow from low-value uses to high-value ones. Could this mechanism also play a significant role in addressing the challenges of transboundary water governance?
Enhancing efficiency
Newly released research published in the American Economic Review by Professor Will Rafey at the University of California Los Angeles provides valuable insights into the functioning and benefits of water markets (Rafey, 2023). Drawing on data from the largest water market in history, located in southeastern Australia, Rafey finds that water trading increased output by 4-6% from 2007 to 2015, equivalent to avoiding an 8-12% uniform decline in water resources. This indicates that water markets can significantly enhance the efficiency of water allocation and usage.
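A quick back-of-the-envelope calculation shows why a 4-6% output gain can be equivalent to avoiding an 8-12% cut in water: with a concave production technology, output falls less than proportionally with water. The Cobb-Douglas form and the elasticity of 0.5 below are assumptions chosen for exposition, not estimates from the paper.

```python
# Illustrative check: with Y = A * W**alpha, a uniform cut in water W
# lowers output Y by 1 - (1 - cut)**alpha. The elasticity alpha = 0.5
# is an expository assumption, not a number from Rafey (2023).
alpha = 0.5
for cut in (0.08, 0.12):
    output_loss = 1 - (1 - cut) ** alpha
    print(f"{cut:.0%} water cut -> {output_loss:.1%} output loss")
# 8% water cut -> 4.1% output loss
# 12% water cut -> 6.2% output loss
```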
While there is a large body of research attempting to estimate the value of trading water rights, most studies have run into at least three challenges. First, river systems involve practical realities that are tough to model, such as costly and uncertain flow constraints. Second, there are geographic and hydrological constraints, including changes in the ecosystem and climate that affect supply and behavior. Third, the set of feasible trades in a water network is subject to many constraints, such as the cost of moving water and the direction it flows.
Rafey takes a two-step approach. The first step estimates production functions for water, which map irrigation volumes into agricultural output, using producer-level longitudinal data on irrigation, physical output, and local rainfall. A traditional concern is that some farms might be systematically more productive than others, confounding the relationship between inputs and outputs through unobserved differences. To address it, Rafey leverages the longitudinal nature of the data and the heterogeneity in how water-sharing rules, also known as diversion formulas, evolve nonlinearly across space and time. Crucially, these diversion rules are outside the control of any individual farm, so they provide an exogenous source of variation for studying how output evolves. The second step links the water-trading data with the estimated production functions to compute the realized value of trades, sidestepping the need to parameterize the set of feasible trades and the many constraints that govern water systems.
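For readers who want the logic in miniature, here is a heavily stylized sketch of the two-step idea on simulated data: first estimate an output elasticity of water in a panel with farm and year effects (a crude stand-in for the diversion-rule variation Rafey actually exploits), then value a trade as the gap in marginal products between buyer and seller. Nothing below reproduces the paper's estimator; it only illustrates the structure of the argument.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# --- Step 1: estimate a log-linear production function on a simulated
# farm-year panel, with fixed effects absorbing unobserved productivity.
rng = np.random.default_rng(0)
n_farms, n_years = 50, 8
df = pd.DataFrame({
    "farm": np.repeat(np.arange(n_farms), n_years),
    "year": np.tile(np.arange(n_years), n_farms),
})
farm_prod = rng.normal(0, 0.3, n_farms)  # unobserved farm productivity
df["log_water"] = rng.normal(5, 0.5, len(df)) + 0.5 * farm_prod[df["farm"]]
df["log_output"] = (1.0 + farm_prod[df["farm"]]
                    + 0.5 * df["log_water"] + rng.normal(0, 0.1, len(df)))

fit = smf.ols("log_output ~ log_water + C(farm) + C(year)", data=df).fit()
alpha = fit.params["log_water"]  # estimated output elasticity of water

# --- Step 2: value an observed trade, to first order, as the difference
# in marginal products between buyer and seller at pre-trade allocations.
def marginal_product(output, water):
    # For Y = A * W**alpha, dY/dW = alpha * Y / W.
    return alpha * output / water

seller = {"output": 100.0, "water": 500.0}  # water-abundant, low MP
buyer = {"output": 150.0, "water": 200.0}   # water-scarce, high MP
volume = 50.0  # traded volume (hypothetical units)
gain = volume * (marginal_product(**buyer) - marginal_product(**seller))
print(f"estimated elasticity: {alpha:.2f}; first-order gain: {gain:.1f}")
```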
Policy implications
Rafey’s research is important for both methodological reasons and policy guidance. Methodologically, it shows how to estimate the value of trading in a setting where there is substantial stochasticity, absence of a complete market, and dynamic game-theoretic interactions without having to specify all these ingredients explicitly in the model. Instead, the two-step approach allows him to flexibly estimate the value of water trading.
With respect to policy, his results suggest the following:
There is growing institutional, including governmental, support for water markets. While market power and other frictions may exist, water markets have been shown to raise allocative efficiency. The estimated total gains from trade provide a lower bound on the value of maintaining the infrastructure required for water markets.
Australia's experience with setting up and running water markets provides a template for other countries. It demonstrates that efficiency gains are possible using modern monitoring technology in an arid region. The extent of a river system's underlying hydrological variability, which can be measured directly from historical river inflows and rainfall, is identified as an important source of water markets' prospective value.
Especially in the presence of climatic change, water rights can play a substantial role in facilitating adaptation. Efficient annual trade should reallocate water from places of relative abundance to places of relative scarcity, lowering the costs of idiosyncratic variability across the river trading network. By increasing the productive efficiency of a basin’s aggregate water endowment, a water market makes drier years less costly, helping irrigators adapt to aggregate shocks. “Without water reallocation through the annual market, output would fall by the same amount as if farms faced a uniform reduction in water resources of 8–12 percent. By comparison, government climate models for this region predict surface water resources to decline by 11 percent in the median year under a 1°C increase in temperature by 2030,” said Rafey.
Although we have long known that water markets are important mechanisms for ensuring the efficient allocation of water resources, we have not known how large the gains are or how they depend on conditions such as varying diversion rules and a changing climate. This research provides the latest comprehensive evaluation of the importance of water markets and their value in the years ahead to help manage scarce resources in a stochastic world.
The role of water markets in transboundary governance?
Transboundary water governance is a complex social, political, and economic issue involving the management and allocation of water resources across political boundaries. It is a critical aspect of international relations, as water is a vital resource that is unevenly distributed across the globe. The governance of these resources is fraught with at least two major challenges.
First, water is a shared resource that does not respect political boundaries. Rivers, lakes, and aquifers often span multiple countries, making it challenging to manage and allocate these resources equitably. Furthermore, the governance of transboundary water resources involves a multitude of stakeholders (e.g., governments, local communities, non-governmental organizations, and private entities), each with different interests, priorities, and perceptions of how water resources should be managed, leading to conflicts and disagreements.
Second, the governance of transboundary water resources is further complicated by climate change, population growth, economic development, and more. These factors increase the demand for water and exacerbate the challenges of managing and allocating these resources.
The creation of water markets has the potential to help water managers meet these challenges by allocating supply and demand efficiently and quickly without central planning and in the face of a wide array of uncertainty, ranging from climatic change to macroeconomic shocks. Water managers and policymakers across the world should work together to build upon the successful lessons learned from Australia’s example in the Murray-Darling Basin.
Single, Remote Men Are Working Less
This article was originally published in City Journal.
The Covid-19 pandemic utterly transformed the world of work. But while employees across the globe have adapted to conducting business from their living rooms, CEOs and business leaders have struggled with this seismic shift, openly voicing their concerns about the impact of remote work on productivity, employee engagement, and corporate culture.
Some business leaders have come out strongly against working from home. “Remote work virtually eliminates spontaneous learning and creativity, because you don’t run into people at the coffee machine,” said Jamie Dimon, CEO of JPMorgan Chase. Others are more optimistic: “People are more productive working at home than people would have expected,” said Mark Zuckerberg, CEO of Facebook. And still others remain cautious: “Working from home makes it much harder to delineate work time from personal time. I encourage all of our employees to have a disciplined schedule for when you will work, and when you will not, and to stick to that schedule,” said Dan Springer, CEO of DocuSign.
But what do the data actually say? I recently released a paper, “Quiet Quitting or Noisy Leisure? The Allocation of Time and Remote Work, 2019-2022,” which documents trends by drawing on the latest data from the Bureau of Labor Statistics’ American Time Use Survey (ATUS).
Since there is no direct measure of fully remote, hybrid, or fully in-person work arrangements in the ATUS, I focus on an index, introduced in 2020 by the University of Chicago’s Jonathan Dingel and Brent Neiman, that measures the degree to which tasks within an occupation can be done remotely. The index also happens to do a good job of identifying what sorts of jobs people are probably working remotely in—with the caveat that an employee at a company in Texas could differ in their work arrangement from a New York worker with the same occupation but a different employer.
I discovered three things. First, remote workers allocated roughly 50 minutes less per day to work activities and 37 more minutes per day to leisure activities in 2022, relative to 2019. Time allocated to home production, such as chores and caring for other household members, did not change.
Second, and perhaps more importantly, these declines are concentrated among males, singles, and those without children. In fact, single males over the age of 45 working remotely spend more than two hours less per day in work activities in 2022, relative to 2019. If anything, college-educated females are the ones who have increased their time at work slightly.
Third, changes in the allocation of time cannot be explained by job-search activity or declines in well-being. If these declines in labor hours were driven by “quiet quitting,” then remote workers would be spending more time searching for other jobs or would feel worse about life overall.
These findings underscore the complexity of the remote-work revolution. It is not merely a binary shift from the office to the home but a complex reordering of our daily lives with far-reaching implications. For businesses, understanding these changes—and especially recognizing the challenges that different demographic brackets are struggling with—is critical for managing workforce expectations and productivity. As we navigate this new landscape, it’s essential to look beyond the surface-level changes and grapple with the deeper shifts in how we allocate our time.