Migration, Borders, and Human Mobility: A World in Motion
Introduction: The Pulse Beneath the Map
Migration is not a crisis. It is a rhythm. A pulse beneath the map. A movement older than nations, older than borders, older than the very idea of belonging. From the first footsteps across savannahs to the crowded terminals of today’s airports, human mobility has always been a story of survival, of hope, of necessity. But in the 21st century, this ancient rhythm collides with modern anxieties: climate collapse, economic disparity, political instability, and the tightening grip of nationalism. This essay explores the evolving landscape of migration through three interwoven threads: forced displacement, climate migration, and labor mobility. It examines how these forces are reshaping national policies, challenging the balance between humanitarian duty and border control, and demanding new forms of international cooperation. It is not a plea, nor a protest - but a reflection. A reckoning with the ethics of movement in a world that both needs and fears it.
I. The Anatomy of Movement: Why People Leave
Migration begins with rupture. A war that empties a village. A drought that cracks the soil. A job that never materializes. People do not move for pleasure - they move because staying has become impossible. Forced displacement, whether driven by conflict or persecution, is not a choice but a last resort. Refugees carry not only their belongings but the weight of lost homes, lost languages, lost futures. Climate migration adds a new layer to this rupture. It is slower, more insidious. Rising seas, failing crops, unbearable heat - these are not bombs, but they destroy lives just the same. The climate migrant is often invisible, unrecognized by legal frameworks, caught between the definitions of refugee and economic migrant. Yet their movement is no less urgent. Labor mobility, by contrast, is often framed as opportunity. But beneath the surface lies a complex web of exploitation, aspiration, and inequality. Migrant workers build cities they cannot afford to live in. They clean homes they will never own. Their labor sustains economies, yet their rights remain precarious.
II. The Border as Symbol and Barrier
Borders are not just lines on a map - they are symbols of sovereignty, identity, and fear. They are where politics meets geography, where law meets longing. In recent years, borders have hardened. Walls have risen. Visas have tightened. Surveillance has intensified. The migrant is no longer just a traveler - they are a suspect. This securitization of borders reflects a deeper anxiety: the fear of the other. Migration is framed as invasion, as burden, as threat. Yet borders do not stop movement - they only make it more dangerous. Smugglers thrive. Lives are lost. Families are separated. The border becomes a site of suffering, not safety. And yet, borders also hold the possibility of welcome. They can be gates, not walls. Bridges, not barriers. The challenge lies in reimagining the border not as a fortress, but as a threshold - a place where humanity is not suspended, but affirmed.
III. The Ethics of Hospitality: Between Security and Solidarity
How does a nation balance its right to control entry with its obligation to protect the vulnerable? This is the central tension of migration policy. Too often, security trumps solidarity. Humanitarian commitments are eroded by political expediency. Refugees are turned away. Asylum systems are overwhelmed. Integration is underfunded. But ethical migration policy is possible. It begins with recognition: that migrants are not threats, but people. That borders must serve justice, not just sovereignty. That integration is not assimilation, but mutual transformation. Education, housing, healthcare - these are not luxuries, but necessities for dignity. Solidarity also requires listening. To migrant voices. To host communities. To the fears and hopes on both sides. It is not enough to open borders - we must open conversations. Only then can integration be more than a policy - it can be a practice of coexistence.
IV. Global Cooperation: A Shared Responsibility
Migration is not a national issue - it is a global one. No country can manage it alone. Climate migration, in particular, demands transnational solutions. Shared data, shared resources, shared commitments. The Global Compact for Migration was a step, but more is needed: binding agreements, equitable resettlement, regional frameworks. International cooperation must also confront inequality. Why do some passports open every door, while others close every possibility? Why do some nations export labor, while others import it without accountability? Migration justice requires economic justice. It requires rethinking development, trade, and aid through the lens of mobility. And above all, it requires imagination. A vision of a world where movement is not criminalized, but coordinated. Where borders are managed ethically, not militarized. Where migration is not a problem to be solved, but a reality to be embraced.
V. Conclusion: A World in Motion
Migration is not a crisis. It is a condition. A constant. A mirror held up to the world’s inequalities and aspirations. To manage it ethically and sustainably, we must move beyond fear. Beyond walls. Beyond the illusion of stasis. This essay has traced the contours of human mobility - its causes, its challenges, its possibilities. It has argued for a migration policy rooted in dignity, cooperation, and courage. Because in the end, we are all migrants. Across time. Across space. Across the borders of our own becoming. And perhaps, the question is not how to stop movement - but how to move together.
Climate Action and Environmental Sustainability: A Civilizational Imperative
Introduction
Climate action is no longer a political choice or a niche environmentalist agenda. In 2025, it stands as a civilizational imperative. The climate crisis is not merely an environmental threat - it is a threat to geopolitical stability, food security, public health, and the very continuity of life as we know it. The planet is on high alert: record-breaking heatwaves, increasingly frequent extreme events, silent ecological collapses, and mounting pressure on natural resources. In this scenario, environmental sustainability emerges as a structuring axis of public policies, business strategies, and multilateral pacts.
The End of Climate Neutrality
Climate neutrality, once seen as a distant goal, has become a starting point. Countries such as Germany, Japan, and Chile have already incorporated net-zero targets into their national plans, while blocs like the European Union advance regulations that penalize emissions and reward green innovation. China, the world’s largest emitter, has committed to neutrality by 2060, and the United States, despite political setbacks, maintains robust investments in clean energy and resilient infrastructure.
The energy transition is at the heart of this shift. Replacing fossil fuels with renewable sources - solar, wind, green hydrogen - is not just an environmental issue but an economic and strategic one. Clean energy generates jobs, reduces geopolitical dependencies, and strengthens energy sovereignty. In countries like Indonesia, international partnerships are mobilizing billions of dollars to ensure a just transition that does not leave vulnerable communities behind.
Ethical Regulation of Artificial Intelligence in the Public Sector: Between Innovation and Responsibility
Artificial Intelligence (AI) is quietly transforming the way governments operate. From algorithms that help distribute social benefits to systems that analyze criminal patterns or optimize urban traffic, AI promises to make public services faster, more efficient, and more personalized. But with that promise comes a warning: if not properly regulated, AI can reinforce inequalities, compromise fundamental rights, and erode citizens’ trust in democratic institutions. This article offers a comprehensive reflection on how to ensure that AI, when serving the State, is also an ally of justice, transparency, and human dignity.
What’s at Stake: Real Ethical Risks
Unlike the private sector, where mistakes may cost money or reputation, in the public sector AI impacts human lives directly. A poorly calibrated algorithm can deny essential subsidies, unjustly classify a citizen as suspicious, or exclude someone from priority medical treatment.
1. Algorithmic Bias: When the Past Contaminates the Future
Algorithms learn from historical data. But what if that data is tainted by racial, gender, or class discrimination? The result is a system that perpetuates injustice. We've seen this in risk assessment tools that disproportionately penalize minorities, or in social triage systems that overlook historically underserved communities.
The solution isn’t just “fixing the code,” but defining what justice means in each context. Equal opportunity? Equitable outcomes? Fair distribution of errors? Regulation must require rigorous fairness testing before and after implementation, with clear and justified metrics.
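The kind of fairness testing called for above can be made concrete. The sketch below, a minimal illustration in Python, computes two commonly used group metrics - selection rate and false-positive rate - for an automated benefits decision. The data, group labels, and choice of metrics are invented for demonstration; a real audit would select and justify metrics for its specific context, as argued above.

```python
# Minimal fairness-audit sketch: compare per-group selection rates and
# false-positive rates. Groups, decisions, and outcomes are illustrative.

def selection_rate(decisions):
    """Share of people the system approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def fairness_report(decisions_by_group, outcomes_by_group):
    """Per-group selection rate and false-positive rate."""
    report = {}
    for group, decisions in decisions_by_group.items():
        outcomes = outcomes_by_group[group]
        # False positive: approved (d == 1) when the ground truth said deny (y == 0)
        fp = sum(1 for d, y in zip(decisions, outcomes) if d == 1 and y == 0)
        negatives = sum(1 for y in outcomes if y == 0)
        report[group] = {
            "selection_rate": selection_rate(decisions),
            "false_positive_rate": fp / negatives if negatives else 0.0,
        }
    return report

decisions = {"group_a": [1, 1, 0, 1], "group_b": [0, 1, 0, 0]}
outcomes  = {"group_a": [1, 0, 0, 1], "group_b": [0, 1, 0, 1]}
print(fairness_report(decisions, outcomes))
```

Disparities between groups on either metric would then be checked against the agency's declared fairness criteria, both before deployment and continuously afterwards.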
2. The Black Box: Decisions Without Explanation
Many AI models, especially advanced ones, are opaque. Not even their creators can explain how they reached a given conclusion. In the public sector, this is unacceptable. If a citizen is harmed by an automated decision, they have the right to know why.
Regulation must demand proportional levels of explainability: the greater the impact, the greater the obligation to provide understandable justifications. In critical cases, simpler and more transparent models may be preferable - even if less sophisticated.
3. Who Is Responsible When Things Go Wrong?
If an algorithm makes a serious mistake, who is accountable? The programmer? The vendor? The public official who clicked “approve”? The answer cannot be vague. Legislation must establish clear responsibilities, require evidence of best practices, and ensure the State is accountable for the systems it uses.
4. Privacy: The New Battleground
AI needs data - lots of it. But when the State cross-references health, education, security, and financial information, the risk of abuse is enormous. Worse still, seemingly harmless data can be used to infer intimate details, such as sexual orientation or health status.
Regulation must be firm: collect only what is necessary, protect data with advanced techniques, and prohibit secondary uses without explicit consent. Surveillance disguised as efficiency cannot be tolerated.
How to Regulate Effectively: From Theory to Practice
Talking about ethics is easy. The challenge is turning principles into clear, enforceable, and auditable rules. Here are some concrete proposals:
1. Laws with Teeth
We must go beyond voluntary guidelines. Legislation should classify AI systems by risk level and impose proportional obligations. High-risk applications - such as in justice, health, or social welfare - must undergo mandatory prior assessments, independent audits, and continuous monitoring.
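As a rough sketch of what risk-proportional obligations could look like once codified, the Python fragment below maps a system's application domain to a tier of duties. The tier names, domain list, and obligations are assumptions for illustration, loosely echoing the risk-based approach described here, not the text of any actual law.

```python
# Illustrative risk-tier classifier for public-sector AI systems.
# Tier names, domains, and obligations are hypothetical examples.

OBLIGATIONS = {
    "minimal": ["transparency notice"],
    "limited": ["transparency notice", "technical documentation"],
    "high": ["prior impact assessment", "independent audit",
             "continuous monitoring", "human oversight"],
}

# Domains treated as high-risk in this sketch
HIGH_RISK_DOMAINS = {"justice", "health", "social_welfare", "policing"}

def obligations_for(domain, automated_decision):
    """Return the compliance duties owed by a system in a given domain."""
    if domain in HIGH_RISK_DOMAINS:
        return OBLIGATIONS["high"]
    if automated_decision:  # affects individuals, but outside critical domains
        return OBLIGATIONS["limited"]
    return OBLIGATIONS["minimal"]

print(obligations_for("health", automated_decision=True))
```

The design point is that the obligation list grows with the stakes: a traffic-optimization tool would owe only a transparency notice, while anything touching benefits or sanctions triggers the full audit regime.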
2. Independent Oversight Bodies
We cannot rely on agencies using AI to self-regulate. Independent bodies with technical experts and real powers must be established - to audit, sanction, suspend systems, and hear citizen complaints.
3. Mandatory Impact Assessments
Just as environmental projects require impact studies, AI systems must be evaluated before deployment. This includes analyzing bias risks, technical failures, social impacts, and contingency plans.
Ethics in Institutional Daily Life
External regulation is essential, but the internal culture of public institutions must also evolve.
1. Ethics Committees and Multidisciplinary Teams
AI projects should be reviewed by ethics committees composed of legal experts, sociologists, sector specialists, and community representatives. Development teams must include not only engineers but also professionals with ethical and legal sensitivity.
2. Contracts with Ethical Clauses
Many systems are purchased from private vendors. Contracts must demand transparency about data used, validation methods, and bias mitigation mechanisms. They must also guarantee audit rights - even for proprietary systems.
3. Real Human Oversight
Human oversight must not be symbolic. Officials must have time, training, and authority to review automated decisions. Certain decisions - such as denying benefits or applying sanctions - must always be validated by a human.
Democracy, Trust, and Participation
AI ethics is not just technical - it's political. It involves public trust, democratic legitimacy, and respect for fundamental rights.
1. Mandatory Public Consultation
Before implementing high-impact AI systems, the State must consult citizens. This means explaining systems in accessible language, holding public workshops, and responding to concerns raised.
2. Clear Limits on Use
The temptation to expand a system's use beyond its original purpose - known as “mission creep” - must be curbed. Any change in function or data scope must trigger new evaluation and consultation.
3. Training and Institutional Culture
Public servants must be trained in algorithmic literacy, digital ethics, and fundamental rights. Ethics must be part of organizational culture - not an afterthought.
International Cooperation and Global Standards
AI knows no borders. Ethical regulation must be harmonized internationally, with shared principles, metrics, and minimum protection standards. This prevents governments from turning to less scrupulous providers and ensures citizens’ rights are respected regardless of jurisdiction.
Conclusion: Regulate to Protect and Innovate
Regulating AI in the public sector is not about halting progress - it's about ensuring progress serves people. It is possible to innovate responsibly, use technology consciously, and build a digital future that respects human dignity.
The key lies in combining clear laws, independent oversight, internal ethical culture, and citizen participation. Only then can AI become an ally of democracy - and not a silent threat.
Artificial Intelligence and Ethical Regulation
Artificial Intelligence is increasingly influencing various facets of governance globally. The regulation of AI has become a pressing concern, with a significant need to balance innovation against risks related to privacy and fairness. This essay will discuss the fragmented state of AI regulation, the challenges policymakers face, emerging global frameworks for governance, and potential future developments related to this critical field.
The explosive growth of AI technologies presents unique opportunities along with unprecedented challenges. From enhancing public service delivery to improving decision-making processes in government agencies, AI has shown its potential to revolutionize the way governance is conducted. However, as governments move to capitalize on these advancements, ethical concerns surrounding AI deployment have also surfaced. The complexities of AI-driven solutions require comprehensive regulatory frameworks to ensure responsible use, encourage innovation, and safeguard individual rights.
AI's potential to transform governance lies in its ability to analyze large data sets, predict trends, and automate tasks. For instance, AI applications are being integrated into public health systems for epidemic prediction and resource allocation. Similarly, smart governance initiatives leverage AI for traffic management and urban planning. Despite these advancements, the deployment of AI raises ethical questions, particularly regarding accountability and transparency. This scenario calls for a meticulous approach towards regulation that fosters innovation while mitigating risks.
The current state of AI regulation is often characterized as fragmented. Different jurisdictions implement diverse regulatory frameworks, leading to inconsistencies that can hinder technological advancement. For example, the European Union has proposed the AI Act, which aims to harmonize regulations across member states by categorizing AI technologies based on their risk levels. In contrast, other regions may adopt a laissez-faire approach, which could lead to technological innovations unfettered by regulatory oversight. This divergence can create challenges for multinational companies that operate across borders, as they must navigate varying regulatory landscapes.
Policymakers are engaged in an ongoing debate about how best to balance the need for innovation with the imperative to address risks associated with AI. The potential for algorithmic bias, privacy violations, and lack of accountability in AI systems raises significant concerns. Moreover, as AI systems increasingly impact decision-making in sensitive areas such as criminal justice, hiring, and healthcare, the stakes become even higher. A case in point is the use of predictive policing algorithms that may inadvertently perpetuate systemic biases, leading to unfair treatment of marginalized communities.
To address these pressing concerns, many governments are seeking to adopt a more ethical approach to AI regulation. Ethical AI principles emphasize fairness, accountability, and transparency. However, translating these abstract principles into effective regulatory frameworks is a complex challenge. Policymakers must consider the technical intricacies of AI while balancing competing interests from various stakeholders, including technology firms, civil society, and the public.
It is evident that any regulatory framework must be flexible enough to accommodate the rapid evolution of AI technologies. Static regulations may quickly become obsolete, stifling the potential benefits of innovation. As such, adaptive regulation that can evolve alongside technological advancements is crucial. Policymakers are increasingly recognizing the need to engage with technology experts, ethicists, and industry stakeholders to formulate effective regulations that do not hinder innovation.
In recent years, there has been a push for global frameworks to govern AI, prompted by the understanding that AI's impact transcends national borders. International organizations such as the United Nations and the Organisation for Economic Co-operation and Development have begun to explore global AI governance principles. The establishment of these frameworks aims to prevent regulatory splintering and promote shared principles that can guide the responsible use of AI worldwide.
One of the influential figures in AI ethics and regulation is Timnit Gebru, who has advocated for the need to address bias and inequities in AI systems. Her work has highlighted the consequences of deploying AI technologies without adequate scrutiny. Another key contributor is Kate Crawford, whose research has underscored the importance of understanding the social implications of AI. Both Gebru and Crawford emphasize that ethical considerations must be integral to the development and deployment of AI systems.
In addition to individual contributions, academic institutions are also playing a critical role in shaping the discussion around AI ethics and regulation. Interdisciplinary research that bridges technology, law, and social sciences is emerging, which can inform policymakers about the broader implications of AI. Schools such as Stanford University and MIT have launched initiatives focused on AI ethics, gathering experts from various fields to foster collaborative dialogue and research.
In exploring various perspectives, it becomes clear that there is no one-size-fits-all solution to AI governance. Different industries may require tailored approaches based on the specific risks and ethical considerations associated with AI technologies. For example, the healthcare sector may prioritize patient privacy and data security, while the financial sector may be more focused on algorithmic transparency and accountability. Therefore, sector-specific regulations can complement overarching frameworks, providing a layered approach to governance.
Another significant challenge in regulating AI is the difficulty of defining what constitutes "AI". The term encompasses a broad spectrum of technologies, from simple automation tools to complex machine learning algorithms. This ambiguity complicates the development of regulatory measures. Policymakers must engage with technologists to reach a clear understanding of the capabilities and limitations of AI technologies in order to create regulations that effectively address the unique risks they entail.
Looking ahead, one of the most crucial areas of focus will be the integration of ethical considerations into the AI development lifecycle. Creating accountability mechanisms will be essential, ensuring that developers and organizations are responsible for the impacts their systems have on individuals and society at large. Establishing such mechanisms may involve mandating transparency in algorithmic decision-making processes, conducting bias assessments, and implementing robust data protection measures.
Furthermore, ongoing education and training for developers, policymakers, and the public are vital to facilitating a responsible AI ecosystem. As rapid advancements in AI technology outpace regulatory responses, continuous learning will enable stakeholders to adapt to emerging challenges. Public engagement is also necessary to foster trust in AI systems, as individuals must understand how these technologies operate and the implications they carry.
Additionally, as AI applications grow in sophistication, the collaboration between the public and private sectors will be pivotal. Partnerships between government agencies and technology companies can help develop best practices and standards that enhance ethical AI development. Such collaborations can also facilitate knowledge sharing and resource allocation, ensuring equitable access to AI innovations.
Another aspect to consider is the potential for technological solutions that enhance ethical regulation. For instance, blockchain technology may offer a means to establish accountability and transparency in AI systems. By enabling secure and traceable data transactions, blockchain could help mitigate risks associated with data privacy breaches and algorithmic transparency.
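The core of that accountability idea does not require a full blockchain: even a simple hash-chained log makes after-the-fact edits detectable, because each entry's hash covers the previous entry's hash. The Python sketch below illustrates the chaining principle only; the record fields are invented for demonstration, and a production system would add signatures, replication, and access control.

```python
import hashlib
import json

# Minimal tamper-evident audit log: each entry's hash covers both its own
# record and the previous entry's hash, so editing any record breaks the chain.

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log, record):
    """Append a record with a hash chained to the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return log

def verify(log):
    """Recompute every hash; any edited or reordered record is detected."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"decision": "benefit_denied", "case": 101})
append_entry(log, {"decision": "benefit_granted", "case": 102})
print(verify(log))                    # True: chain is intact
log[0]["record"]["case"] = 999        # simulate tampering with an old record
print(verify(log))                    # False: tampering detected
```

Whether such a log lives on a distributed ledger or a well-governed database matters less than the property it provides: automated decisions become auditable after the fact without trusting any single administrator's copy.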
As AI continues to evolve, so too must the regulatory frameworks that govern its use. Engaging in ongoing dialogue among stakeholders will be crucial in navigating the challenges and opportunities presented by AI technologies. Policymakers must remain vigilant and adaptable, allowing for flexibility in regulations that reflect the dynamic nature of AI innovation.
In conclusion, the regulation of Artificial Intelligence stands at a crossroads. The need for a coherent approach that balances innovation with ethical considerations is paramount. Fragmentation in regulation poses risks not only to technological advancement but also to societal fairness and individual rights. Emerging global frameworks hold promise in fostering shared principles and preventing regulatory splintering. As stakeholders navigate this complex landscape, ongoing dialogue, sector-specific approaches, and adaptive regulations will be essential in creating an ethical framework that promotes responsible AI governance. The future of AI regulation will depend on our collective commitment to fostering innovation while prioritizing fairness, accountability, and transparency.
Bibliography
- Floridi, L. The Ethics of Artificial Intelligence. Oxford Internet Institute. Explores the ethical foundations of AI, proposing a human-centered approach focused on dignity and algorithmic responsibility.
- Binns, R. “Fairness in Machine Learning: Lessons from Political Philosophy.” ACM Conference on Fairness, Accountability, and Transparency (2018). Analyzes algorithmic fairness through the lens of political philosophy, offering a critical perspective on how AI systems can reproduce social inequalities.
- Kashefi, P., et al. “Shaping the Future of AI: Balancing Innovation and Ethics in Global Regulation.” Uniform Law Review. Addresses the challenges of international AI regulation, proposing adaptive models that reconcile innovation with ethical principles.
- Mittelstadt, B. “Principles Alone Cannot Guarantee Ethical AI.” Nature Machine Intelligence. Argues that ethical principles, while essential, are insufficient without concrete mechanisms for implementation and accountability.
- Jobin, A., Ienca, M., & Vayena, E. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence. A comparative analysis of over 80 international documents on AI ethics, highlighting convergences and gaps.
- Winfield, A., & Jirotka, M. “Ethical Governance Is Essential to Building Trust in Robotics and AI Systems.” Royal Society. Argues that public trust in AI depends on robust and transparent ethical governance structures.
- IEEE Global Initiative. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE Standards Association. Proposes ethical guidelines for the development of intelligent systems, with a focus on human well-being.
- Crawford, K. Atlas of AI. A critique of the material and social infrastructures behind AI, revealing the ethical and environmental impacts of its expansion.
- Gebru, T. Articles and interventions on algorithmic bias and racial justice in AI, especially in facial recognition systems and predictive policing.
- European Commission. Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (AI Act). Available on EUR-Lex. A risk-based approach to AI regulation within the European Union, with global implications.
Humanitarian Health Care Prevention

THROUGH OUR ONLINE HEALTH PREVENTION WORK, WE HAVE HELPED SAVE THOUSANDS OF LIVES IN RECENT YEARS
We provide free information and opinions as humanitarian aid
We don't raise funds
We don't accept donations
Copyright protection is secured
WORLD HEALTH ORGANIZATION - NEWS AND TOP STORIES
Worldwide, 59.5 million people are displaced - 22 million more than a decade ago
Cancer is a large group of diseases that can start in almost any organ or tissue of the body when abnormal cells grow uncontrollably, go beyond their usual boundaries to invade adjoining parts of the body, and/or spread to other organs. The latter process is called metastasizing and is a major cause of death from cancer. A neoplasm and malignant tumour are other common names for cancer.
Cancer is the second leading cause of death globally, accounting for an estimated 9.6 million deaths, or one in six deaths, in 2018. Lung, prostate, colorectal, stomach, and liver cancer are the most common types of cancer in men, while breast, colorectal, lung, cervical, and thyroid cancer are the most common among women.
The cancer burden continues to grow globally, exerting tremendous physical, emotional, and financial strain on individuals, families, communities, and health systems. Many health systems in low- and middle-income countries are least prepared to manage this burden, and large numbers of cancer patients globally do not have access to timely quality diagnosis and treatment. In countries where health systems are strong, survival rates of many types of cancers are improving thanks to accessible early detection, quality treatment, and survivorship care.
- Cancer is a leading cause of death worldwide, accounting for nearly 10 million deaths in 2020, or nearly one in six deaths.
- The most common cancers are breast, lung, colon and rectum and prostate cancers.
- Around one-third of deaths from cancer are due to tobacco use, high body mass index, alcohol consumption, low fruit and vegetable intake, and lack of physical activity.
- Cancer-causing infections, such as human papillomavirus (HPV) and hepatitis, are responsible for approximately 30% of cancer cases in low- and lower-middle-income countries.
- Many cancers can be cured if detected early and treated effectively.
Cardiovascular diseases (CVDs) are the leading cause of death globally, taking an estimated 17.9 million lives each year. CVDs are a group of disorders of the heart and blood vessels and include coronary heart disease, cerebrovascular disease, rheumatic heart disease and other conditions. More than four out of five CVD deaths are due to heart attacks and strokes, and one third of these deaths occur prematurely in people under 70 years of age.
The most important behavioural risk factors for heart disease and stroke are unhealthy diet, physical inactivity, tobacco use and harmful use of alcohol. The effects of behavioural risk factors may show up in individuals as raised blood pressure, raised blood glucose, raised blood lipids, and overweight and obesity. These “intermediate risk factors” can be measured in primary care facilities and indicate an increased risk of heart attack, stroke, heart failure and other complications.
Cessation of tobacco use, reduction of salt in the diet, eating more fruit and vegetables, regular physical activity and avoiding harmful use of alcohol have been shown to reduce the risk of cardiovascular disease. Health policies that create conducive environments for making healthy choices affordable and available are essential for motivating people to adopt and sustain healthy behaviours.
Identifying those at the highest risk of CVDs and ensuring they receive appropriate treatment can prevent premature deaths. Access to noncommunicable disease medicines and basic health technologies in all primary health care facilities is essential to ensure that those in need receive treatment and counselling.
- Cardiovascular diseases (CVDs) are the leading cause of death globally.
- An estimated 17.9 million people died from CVDs in 2019, representing 32% of all global deaths. Of these deaths, 85% were due to heart attack and stroke.
- Over three-quarters of CVD deaths take place in low- and middle-income countries.
- Out of the 17 million premature deaths (under the age of 70) due to noncommunicable diseases in 2019, 38% were caused by CVDs.
- Most cardiovascular diseases can be prevented by addressing behavioural risk factors such as tobacco use, unhealthy diet and obesity, physical inactivity, and harmful use of alcohol.
- It is important to detect the cardiovascular disease as early as possible so that management with counselling and medicines can begin.