The application of artificial intelligence (AI) within medical imaging is transforming healthcare, enabling doctors to achieve more accurate diagnoses more efficiently. AI in medical imaging is prominent in pathological image analysis, fundus imaging, and mammography, but its application to chest X-rays remains comparatively narrow. Chest X-rays are still widely used in medical diagnostics despite the availability of more advanced imaging modalities. Integrating AI into chest X-ray readers therefore presents a potential pathway to improving accuracy and speeding up diagnosis. However, it also raises ethical considerations and practical implementation issues that must be addressed. This essay will consider the ethical issues and strategies for successfully implementing AI for chest X-rays and review existing literature on the current status of AI models for chest X-ray analysis.
Aim and Objectives
This research aims to explore the use and implications of artificial intelligence in chest X-ray reader technology and propose an implementable plan for replacing some degree of human decision-making with AI-based decision-making. The objectives are to analyze the current approaches, identify ethical considerations, propose strategies for implementation, and review existing literature on radiological AI models and chest X-ray analysis.
- What are the current approaches and advancements in medical imaging AI models?
- What are the ethical considerations of using artificial intelligence in chest X-ray readers?
- What strategies can be implemented to successfully integrate AI into chest X-ray readers?
- What is known about the existing literature on radiological AI models and chest X-ray analysis?
Current Approaches and Advancements in Medical Imaging AI Models
In the 1960s, computerized analysis of medical images helped specialists identify specific instances, such as X-ray anomalies, but until recently, it only served a limited purpose (He et al., 2020). With machine learning and deep learning, computer algorithms can accurately diagnose medical conditions. Convolutional neural networks (CNNs) detect patterns in image data, while deep learning algorithms are “trained” on a data set to recognize patterns and make predictions. Training processes large amounts of data, selecting and evaluating features to identify the most critical indicators and thereby improving performance and accuracy. Machine learning and deep learning algorithms applied to medical imaging have enabled computing systems to accurately diagnose medical conditions from single images, enabling earlier disease detection and better patient care (Kavitha et al., 2023).
Convolutional Neural Networks
Convolutional neural networks (CNNs) have seen widespread use due to their ability to extract features from source images rapidly. For chest X-rays, this capability can be used for more advanced tasks such as lesion detection or classification. These CNN-based programs are designed to autonomously detect abnormalities in the pixel intensities and varying textures of images. This technology proves highly useful for the health sphere, providing crucial assistance for follow-up medical examinations such as identifying diseases or predicting treatments. Moreover, implementations of these algorithms could automate and revolutionize healthcare, enabling doctors to assess vast amounts of data with incredible accuracy and efficiency (Haluza & Jungwirth, 2023). At the heart of a CNN lie its layers. The most fundamental is the convolutional layer, where features are extracted by applying functions known as filters. A notable property of this layer is a degree of translation invariance: the network can recognize the same object at different positions in the image. The output from the convolutional layers then passes through a fully connected layer and a non-linear squashing function, either the hyperbolic tangent or a sigmoidal activation, to normalize the result into probabilistic values for image classification. Finally, dropout is applied to reduce overfitting and prevent the network from relying on individual pixels that are insignificant to the classification.
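The layer sequence described above (convolution with a filter, a squashing activation, and dropout) can be sketched in miniature. This is a toy illustration in NumPy, not a production model: the 8×8 "radiograph", the edge-detecting filter, and the random fully connected weights are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def convolve2d(image, kernel):
    """Valid 2-D convolution: slide the filter over the image and
    record the response at each position (feature extraction)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    """Squash the fully connected layer's output into (0, 1) so it
    can be read as a class probability."""
    return 1.0 / (1.0 + np.exp(-x))

def dropout(activations, rate, training=True):
    """Randomly zero a fraction of activations during training so the
    network cannot rely on any single pixel or feature."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Toy 8x8 "radiograph" and a 3x3 vertical-edge filter.
image = rng.random((8, 8))
kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)

features = convolve2d(image, kernel)      # convolutional layer
features = dropout(features, rate=0.25)   # regularization
weights = rng.normal(size=features.size)  # fully connected layer
probability = sigmoid(features.ravel() @ weights)
print(f"P(abnormal) = {probability:.3f}")
```

A real chest X-ray classifier would stack many such layers and learn the filter and weight values from labeled data rather than fixing them by hand.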
Deep Learning Algorithms
Deep learning algorithms are commonly applied due to their greater accuracy and robustness in comparison to traditional machine learning methods such as support vector machines (SVMs). These algorithms are adept at dealing with dynamic and changing environments, allowing them to identify patterns and features that may have been previously undetectable. This makes them helpful in recognizing subtle differences between images or data sets, which can result in more accurate diagnoses. Similarly, deep learning can be used to recognize structures and detect anomalies in medical images to aid in diagnosis or disease identification. Furthermore, deep learning models can aid in the study of the complexity or structure of the underlying data, giving a better understanding of how a given system works. Ultimately, deep learning is an invaluable tool for medical diagnosis and image analysis, with its potential benefits only expected to grow further in the near future.
More specifically, deep learning algorithms can be used to analyze radiographs, extracting features or patterns that would be difficult to detect through traditional methods due to the subtlety or complexity of the data (Noguerol et al., 2019). Furthermore, deep learning can more accurately identify lesions, as well as abnormal cells in biological and medical images, improving accuracy and bringing quicker diagnoses for patients. Through the use of deep learning, lesions and anomalies can be detected more quickly and accurately with reduced human bias in the assessment process, leading to improved diagnosis rates overall.
Automated systems are also applied to medical images to support automated diagnosis and treatment decisions, allowing healthcare providers to make decisions more efficiently with less manual intervention. This could improve accuracy, as an MRI or CT scan can be analyzed in a timely manner, completing arduous and complex tasks faster and with fewer errors. Furthermore, automation could increase precision in preventive healthcare by identifying minute abnormalities in medical images that suggest a possible condition or illness before it manifests into a more significant, and sometimes irreversible, problem. The use of automated systems also has ethical implications. It could produce a ‘black box’ effect in which decisions are made without full transparency about how they were reached. It could also erode human-patient relationships in healthcare settings, as human-to-human connections are diminished throughout the diagnostic process; decisions may prove inaccurate when factors are missed in the absence of an experienced radiologist who might catch anomalies overlooked by an automated process. In addition, automated medical processes are difficult to hold accountable because they lack agency, and this uncertainty can cause dismay among healthcare professionals, limiting the technology's effective use (Trocin et al., 2021).
Accountability and Trust
AI technologies offer enormous potential in terms of accuracy, safety, scalability, and cost savings, but they also raise questions about potential misuse if they are not regulated accordingly. By establishing ethical responsibility for AI technologies, trust among all stakeholders can be built. Rigorous regulation and measures governing the implementation of such technologies are therefore required, not only to ensure usability and reliability but also to allow public transparency during the development and implementation of such systems. However much humans trust AI systems for decision-making, people should retain the freedom to challenge the accuracy and dependability of machine learning algorithms. Furthermore, decision-making should not depend on a single technology solution or methodology; it should instead draw on multiple sources, taking human judgment into account when appropriate. This supports more equitable decision-making and more accurate insights into any given circumstance than a singular AI system, which may not have considered the particular nuances of different use cases. Accountability and trust in AI technology can thus be achieved both by regulating its usage and by ensuring that humans remain involved in decisions about any use of advanced machine learning systems.
The privacy concerns associated with using AI technologies involve both patient data privacy when storing data, accessing data, or sharing data, as well as algorithmic privacy when algorithmic decisions result from overfitting data set elements or restrictive training data sets that could exclude certain population groups from proper treatment methods and outcomes. In the context of patient data privacy, the impact of AI on the security of patient medical data cannot be overstated. Security measures should be established that enable the protection and management of patient information in a secure and reliable manner (Azeez & Van der Vyver, 2019). Such measures could include encryption protocols and access mechanisms to prevent unauthorized access, as well as establishing accountability requirements and guidelines.
In addition, assurance processes need to be implemented in order to ensure the data is accurate and meets regulatory standards. With respect to algorithmic privacy, specific measures should be taken to prevent the overfitting of data sets and ensure that all population groups are represented in a way that is equitable for decision-making. For example, bias mitigation techniques should be employed to ensure that decisions are not based on false correlations or false patterns in the data. Additionally, large-scale datasets must be developed to capture the heterogeneity of different population groups, enabling fair representation and avoiding one-size-fits-all approaches. Furthermore, there needs to be a focus on transparency and independent oversight in the development process to ensure that AI solutions conform to ethical standards. Accordingly, by adhering to solid security practices and rigorous algorithmic validations, not only can patient data privacy be shielded from potential security breaches, but algorithmic models can ensure fairness for all population groups when making decisions about medical imaging diagnostics.
Because their complex decision-making processes involve vast amounts of data identification, training dataset preparation, and model building by expert teams, many people lack even a basic understanding of how AI solutions work or where they obtain their data. It is therefore of utmost importance that a transparent approach be employed when using AI solutions, giving users proper insight into how decisions were taken without compromising the privacy or security of sensitive information that may have gone into making such decisions (Rawal et al., 2021). This includes highlighting the various degrees of certainty present within algorithmic decisions and providing detailed explanations, in both technical language and layman’s terms where necessary, of how each decision was arrived at. Validation of the accuracy of the output results should also be obtained from an independent source where possible. Moreover, the process should involve explaining the model selection and training procedures employed by AI experts to build predictive models, as this could help to address any issues related to data bias or manipulation.
Strategies for Implementation
The implementation of AI technologies requires suitable technology infrastructure, such as powerful computing resources, secure storage systems, and high-speed networking capabilities, to ensure the scalability of the solutions and build user confidence. Such technology infrastructure must be incorporated strategically to meet cost and other requirements. For instance, the decision to embrace cloud computing ought to weigh off-premises risks, such as data security, against advantages like scalability and cost savings. Similarly, specific analytics services can often provide more precise information than local computing resources. Furthermore, a sustainable architecture should facilitate the integration of legacy systems using APIs wherever feasible (Slamnik-Kriještorac et al., 2020). Such an arrangement also allows for independent yet concurrent development of components for rapid prototyping and experimentation. Moreover, distributed and decentralized architectures should be employed to scale data-intensive operations without the need for costly upgrades or single-vendor dependence. All in all, well-chosen technology infrastructure is essential for the successful implementation of AI solutions for medical imaging diagnostics.
In order for AI algorithms to efficiently reach accurate decisions from medical images, the data quality going in must be of an acceptable level. This means ensuring that the data is of high quality, that it is appropriately labeled, and that it exposes relevant information to the algorithms in order to avoid over-generalization and overfitting. This can be accomplished through various methods, including providing images with clear delineation of areas and ensuring that there are both positive and negative labels consistent across all training/development datasets. Additionally, it is essential to utilize open datasets such as the NIH Chest X-ray dataset to bring in generalizable and accurate results and inform the algorithms accordingly. Furthermore, there should also be a process to check the accuracy of the training datasets and labels so as to be sure that any algorithms derived from these datasets result in AI systems that are trustworthy and reliable. Finally, AI algorithms should be tested in order for them to accurately recognize and interpret patterns from medical images, leading to more reliable diagnoses and recommendations for patient care.
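The labeling checks described above can be automated before training begins. The sketch below assumes a hypothetical binary labeling scheme and invented file names; it simply verifies that every record carries a recognized label and that each split contains both positive and negative examples.

```python
# Assumed binary labelling scheme; real datasets such as the NIH
# Chest X-ray dataset use multi-label annotations.
ALLOWED_LABELS = {"normal", "abnormal"}

# Hypothetical training/development splits (file names are invented).
dataset = {
    "train": [("img_001.png", "normal"), ("img_002.png", "abnormal"),
              ("img_003.png", "abnormal")],
    "dev":   [("img_101.png", "normal"), ("img_102.png", "abnormal")],
}

def check_split(records):
    """Reject unknown labels and splits missing a class; return counts."""
    labels = [label for _, label in records]
    unknown = set(labels) - ALLOWED_LABELS
    if unknown:
        raise ValueError(f"unrecognised labels: {unknown}")
    if set(labels) != ALLOWED_LABELS:
        raise ValueError("split is missing a positive or negative class")
    return {label: labels.count(label) for label in ALLOWED_LABELS}

counts = {split: check_split(records) for split, records in dataset.items()}
print(counts)
```

Running such a check on every dataset version makes label inconsistencies visible before they silently degrade a trained model.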
Appropriate decision processes should also be implemented to ensure accuracy and consistency when using AI-based solutions. This includes verifying the appropriateness and validity of the data used as a basis for decision-making, maintaining data minimization practices, and understanding the system's levels of certainty in its outputs. Moreover, it is advisable for decision-making processes to incorporate both automated decisions made with the help of AI-based solutions and human judgment. This ensures that any automated process is validated against relevant human insight, training the AI-based solution to produce optimal results while avoiding ungrounded decisions (Kappen & Naber, 2021). Furthermore, any resulting process should be continuously monitored and tested to ensure that it still produces reliable results and that added AI models are validated precisely. Such a strategy will enable organizations to identify any necessary changes quickly and update their decision-making processes in a timely manner.
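One common way to combine automated and human decisions, consistent with the approach above, is confidence-based routing: predictions below a review threshold go to a radiologist instead of being acted on automatically. The threshold and prediction records below are illustrative values, not clinical ones.

```python
# Illustrative review threshold; in practice this would be calibrated
# against validation data and clinical risk tolerance.
REVIEW_THRESHOLD = 0.90

# Hypothetical model outputs for three cases.
predictions = [
    {"case": "A", "finding": "pneumonia", "confidence": 0.97},
    {"case": "B", "finding": "normal",    "confidence": 0.74},
    {"case": "C", "finding": "nodule",    "confidence": 0.88},
]

def route(prediction):
    """Accept confident automated calls; flag the rest for human review."""
    if prediction["confidence"] >= REVIEW_THRESHOLD:
        return "automated"
    return "human_review"

routing = {p["case"]: route(p) for p in predictions}
print(routing)
```

The design choice here is deliberate: low-confidence cases are never silently discarded but explicitly handed to a human, which keeps accountability with a clinician for exactly the ambiguous cases where models are least reliable.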
Expertise and Resources
Having an agile, skilled team with sufficient resources, including IT infrastructure and personnel, is necessary to enable speedier concept testing prior to implementation, as this can help reduce development costs and enable faster algorithm improvement feedback loops. This concept of rapid concept testing prior to implementation is founded on the belief that such a process will help to identify any potential challenges arising from the implementation of a concept early on, thereby eliminating costly overruns and improving the accuracy of an algorithm. Furthermore, such an approach allows a more granular understanding of how a concept may impact more sensitive areas, such as health and economic aspects, by creating more realistic scenarios with which to model the effects. With such a method, organizations can more readily observe how an incoming idea may impact various productive and temporal costs and adjust appropriately. Additionally, because of the organization's closer proximity to the test environment, additional data points can be discerned that might otherwise be missed when a concept is implemented after testing on a broader scale (Ellepola et al., 2022). The ability to gain such invaluable information allows companies to move more quickly than their competitors and design new solutions faster than was previously possible.
Organizations looking to deploy AI solutions should be mindful of the local, national, and international regulatory frameworks that may govern their particular solution or usage domain, as such frameworks provide essential guidance on how businesses may adhere to applicable laws while utilizing AI technology. These regulations may contain provisions concerning data protection, privacy rights, and consumer protection, as well as software functionality such as classification and pricing filters. To meet its obligations under the applicable regulations, an organization should not only review existing privacy and security procedures but also adapt them into any potential solution, discussing these considerations during the architecture planning stages before wider-scale deployment. A practical approach to fulfilling regulatory requirements is to consider the legal ramifications of AI usage for the specific product or service being offered, as such a proactive approach may curb significant risks compared to reactive action taken after a violation. Furthermore, organizations may also benefit from developing clear and accurate policies and statements that explain, in plain language, their efforts to empower customers by educating them on their rights with regard to their personal data.
Existing Literature on Radiological AI Models and Chest X-Ray Analysis
Accuracy and Performance Levels
A study conducted on actual patients with pneumonia showed that a CNN model developed to detect pneumonia from chest X-rays achieved an overall accuracy of 86%, surpassing radiologists in precision and explainability. The model highlighted the most diagnostic regions of the chest X-rays, allowing clinicians to make an informed diagnosis with a concrete rationale for their decisions. Furthermore, this transparency of decision-making enables researchers to understand how decisions were reached and offers more accountability for their accuracy. Nevertheless, X-ray abnormalities are sometimes too subtle for CNN models to recognize (Berlanga, 2021). To ensure adequate precision, clinicians have to make a judgment call after identifying the presence of such abnormalities, because CNN models need more comprehensive data to understand subtle lesions or disease categories. Thus, while CNN models are powerful and easy-to-use tools, they still depend heavily on human input and expertise to account for highly complex clinical scenarios.
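The performance figures discussed above come from standard confusion-matrix arithmetic. The worked example below uses hypothetical counts chosen so that overall accuracy matches the 86% reported in the cited study; the individual cell values are invented for illustration only.

```python
# Hypothetical confusion-matrix counts over 100 chest X-rays
# (invented so that accuracy works out to 0.86, as in the study).
tp, fp, fn, tn = 43, 7, 7, 43

# Accuracy: fraction of all cases classified correctly.
accuracy = (tp + tn) / (tp + fp + fn + tn)
# Precision: fraction of positive calls that were truly positive.
precision = tp / (tp + fp)
# Sensitivity (recall): fraction of true positives that were caught.
sensitivity = tp / (tp + fn)

print(f"accuracy={accuracy:.2f} "
      f"precision={precision:.2f} sensitivity={sensitivity:.2f}")
```

Reporting precision and sensitivity alongside accuracy matters clinically: a model can reach high accuracy on an imbalanced dataset while still missing most of the abnormal cases.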
Human Evaluations and Guidelines
Human evaluations are often necessary after automated decisions are made in order to separate false positives from true positives and prevent any unnecessary treatments from being prescribed. Guidelines should be tailored to the condition being considered and should use contributions from both practical expertise and radiology knowledge, enabling efficient decision support for quick diagnosis and treatment decisions. Furthermore, more advanced evaluation techniques should be used, such as additional testing and analysis, in order to identify any potential false positives that automated systems may have marked as true positives. To identify false positives, it is necessary to leverage different investigative techniques, including but not limited to reviewing patient-reported outcomes and other supplemental imaging data. Moreover, ongoing auditing should also be utilized in order to measure the accuracy of automated decisions on an ongoing basis, allowing for any errors made by automated systems to be identified and corrected in a timely manner in order to improve patient outcomes. Lastly, it is also essential to consider the context of patient information when making decisions so that decisions made by automated systems can be appropriately evaluated and calibrated accordingly, reducing the rate of false positives. In doing so, automated systems will become more reliable and accurate in the long run, reducing the need for human evaluations.
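The ongoing auditing described above can be made concrete by comparing automated calls against subsequent human evaluations and tracking how often automated positives are overturned. The audit records below are invented for illustration; a real audit would draw on logged clinical decisions.

```python
# Hypothetical audit log pairing each automated call with the later
# human evaluation of the same X-ray.
audit_log = [
    {"auto": "abnormal", "human": "abnormal"},
    {"auto": "abnormal", "human": "normal"},   # false positive
    {"auto": "normal",   "human": "normal"},
    {"auto": "abnormal", "human": "abnormal"},
]

def overturned_share(log):
    """Fraction of automated positive calls later overturned by a
    human reviewer (i.e., judged to be false positives)."""
    positives = [r for r in log if r["auto"] == "abnormal"]
    overturned = [r for r in positives if r["human"] == "normal"]
    return len(overturned) / len(positives) if positives else 0.0

share = overturned_share(audit_log)
print(f"share of automated positives overturned: {share:.2f}")
```

Tracking this share over time gives an early warning of drift: a rising overturn rate signals that the automated system's decisions need recalibration before patient care is affected.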
Governance Framework and Legal Safeguards
Having a governance framework in place is prudent for regulating the deployment and usage of such technologies, as it provides an avenue for accountability if something does go wrong and encourages best practices throughout the adoption cycle – from conception to deployment – in terms of data usage, privacy standards, and the interpretation of outcome results. Moreover, legal advice may need to be sought regarding any potential implications within specific legal jurisdictions prior to wider-scale deployment.
Education Necessary for Potential Realization of AI Technology
To obtain the maximum benefit from automated decision support systems, it is essential that staff are adequately educated on the technology being used – in terms of both functional knowledge and ethical implications – so they can use these systems responsibly without violating trust or compromising privacy standards. It is assumed that operators responsible for running these systems will possess basic IT literacy, but access to further specialized training courses can also be beneficial when using more advanced automated systems incorporating deep learning algorithms.
Revisiting Aim, Objectives, and Outcomes
The aim of this research was to explore the use and implications of artificial intelligence in chest X-ray reader technology and propose an implementable plan for replacing some degree of human decision-making with AI-based decision-making. The objectives focused on analyzing current approaches, identifying ethical considerations, proposing strategies for implementation, and reviewing existing literature on radiological AI models and chest X-ray analysis.
This research shows that the utilization of artificial intelligence technologies in chest X-ray readers holds excellent potential for boosting accuracy and providing more timely diagnoses, addressing a crucial need for augmentation in this domain. This progress opens an array of promising opportunities for enhanced diagnosis and more confident results. Realizing it, however, requires a robust platform to support the adoption, administration, and deployment of these techniques; incorporating efficient regulations, adopting an ethical framework, and providing a secure environment are imperative.
In particular, ethical considerations such as trust, liability, and privacy must be taken into account while deploying such technologies, especially where sensitive healthcare data is concerned. Frameworks must be established to explicitly define the extent to which AI would be integrated into decision-making processes, mindful of laws enforced by regulatory bodies, significant medical consequences caused by irrational inference, and the accountability of stakeholders. Moreover, establishing trust would enable algorithms to support diagnosis better, as models would be designed to complement human interpretation rather than compete with it purely on accuracy (Johnson et al., 2021). Such a balance has the remarkable potential not only to boost accuracy levels and enhance the delivery of medical services but also to formulate an all-encompassing ethical blueprint, paving the way for responsible yet efficient deployment of AI.
Key Contributions to Innovation or Change
This research has explored how AI can supplement the reading of chest X-rays, with a particular focus on ethical considerations, strategies for implementation, and the existing literature on AI models for chest X-ray analysis. It highlights that, by closely examining the relevant ethical considerations and putting strategies for safe and responsible implementation in place, AI technologies can enhance accuracy and reduce the time radiologists take to diagnose diseases. Ethical considerations need to be carefully weighed to ensure that AI technologies are used in a responsible and socially acceptable manner; for example, it is essential that the data used in training the AI models is of high quality and free from bias. Similarly, those creating and managing the AI models must have the appropriate technical expertise as well as sound knowledge of the associated legal and ethical guidelines.
When it comes to developing strategies for implementation, robust testing of the AI models is essential to ensure that they are accurate and reliable. Ensuring that feedback is given to radiologists about any errors that may be identified by the AI models is also essential; this will enable them to become more aware of possible sources of bias or errors in their own readings so they can work on improvement. Additionally, incorporating human judgment into AI-powered decision-making processes can further reduce risks that may arise from potential inaccuracy in the model’s readings. Last but not least, evaluating outcomes of any AI implementation must also be taken into consideration in order to identify any potential issues and areas of improvement.
Development for Leadership Practice
This research emphasizes the importance of carefully assessing the use of AI technologies for medical applications in light of various ethical considerations, which range from data privacy and security issues to risks associated with machine learning, social biases, and other unintended results. Care must be taken when developing suitable strategies for implementation, as well as when deploying any new AI solutions, to ensure that all ethical implications are incorporated into the decision-making process and that existing literature and regulations are thoroughly reviewed and adhered to. Moreover, decision-makers need to understand that when assessing the potential of AI technology for medical applications, it is not enough to merely consider how it might improve accuracy or bring cost savings—it is also essential to consider potential ethical implications in order to comply with all applicable legislation (Amann et al., 2020). As such, medical decision-makers should be mindful of the ethical aspects of using AI technologies in medicine while also seeking to extract maximum benefit from them.
Limitations of Research
The research in this paper relies primarily on existing literature and the current state of knowledge, limiting the scope for investigation into further nuances or considerations. While specific ethical considerations, such as privacy implications and trust-building, are discussed in the context of AI for medical image diagnostics, additional implications may arise from broader usage contexts that could not be accounted for due to time and scope constraints. Furthermore, given the unique complexities inherent in implementing real-world AI solutions for clinical decision processes, further research could explore such implications (Chua et al., 2021). Ultimately, future investigations with a different scope and focus should provide further insights into the ethical considerations and real-world complexities arising from the use of AI in medicine.
Consequently, it is also essential to consider the security implications of integrating machine learning technologies with private medical data. Given the potential for AI-enabled medical image diagnostics to revolutionize the healthcare industry, there is a need to ensure appropriate protection and handling of such sensitive information. Alongside concern for patient privacy, healthcare providers should also ensure that any AI pre-trained models they employ to facilitate clinical decision processes are trustworthy, reliable, and have been trained on a comprehensive evidence base. Finally, further research into these areas should be conducted so as to ensure that any implementations of AI are in line with applicable laws and regulations regarding data protection and privacy.
Areas for Further Research
Sketching out an overarching governance framework for the usage of data collected from various sources could be explored further, ensuring the responsible deployment of such technologies while maintaining privacy standards. The specific legal implications of using AI technologies within particular jurisdictions also require further research, as these may vary considerably depending on both technical capabilities and local laws. Moreover, the healthcare economics of AI warrant further inquiry, primarily through a cost/benefit lens, including how AI could take over routine decisions while more complex ones are referred to senior decision-makers rather than individual radiologists, in order to prevent potential conflicts of interest in decision-making processes.
Amann, J., Blasimme, A., Vayena, E., Frey, D., Madai, V. I., & Precise4Q Consortium. (2020). Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20, 1-9.
Azeez, N. A., & Van der Vyver, C. (2019). Security and privacy issues in e-health cloud-based system: A comprehensive content analysis. Egyptian Informatics Journal, 20(2), 97-108.
Berlanga, R. (2021, October). Designing Chest X-ray Datasets for Improving Lung Nodules Detection Through Convolutional Neural Networks. In Artificial Intelligence Research and Development: Proceedings of the 23rd International Conference of the Catalan Association for Artificial Intelligence (Vol. 339, p. 345). IOS Press.
Chua, I. S., Gaziel‐Yablowitz, M., Korach, Z. T., Kehl, K. L., Levitan, N. A., Arriaga, Y. E., … & Hassett, M. (2021). Artificial intelligence in oncology: Path to implementation. Cancer Medicine, 10(12), 4138-4149.
Haluza, D., & Jungwirth, D. (2023). Artificial Intelligence and Ten Societal Megatrends: An Exploratory Study Using GPT-3. Systems, 11(3), 120.
Ellepola, G., Pie, M. R., Pethiyagoda, R., Hanken, J., & Meegaskumbura, M. (2022). The role of climate and islands in species diversification and reproductive-mode evolution of Old World tree frogs. Communications Biology, 5(1), 347.
He, Z., Chen, Z., Tan, M., Elingarami, S., Liu, Y., Li, T., … & Li, W. (2020). A review on methods for diagnosis of breast cancer cells and tissues. Cell Proliferation, 53(7), e12822.
Irani, Z., Abril, R. M., Weerakkody, V., Omar, A., & Sivarajah, U. (2022). The impact of legacy systems on digital transformation in European public administration: Lesson learned from a multi case analysis. Government Information Quarterly, 101784.
Johnson, K. B., Wei, W. Q., Weeraratne, D., Frisse, M. E., Misulis, K., Rhee, K., … & Snowdon, J. L. (2021). Precision medicine, AI, and the future of personalized health care. Clinical and translational science, 14(1), 86-93.
Kappen, M., & Naber, M. (2021). Objective and bias-free measures of candidate motivation during job applications. Scientific Reports, 11(1), 21254.
Kavitha, R., Jothi, D. K., Saravanan, K., Swain, M. P., Gonzáles, J. L. A., Bhardwaj, R. J., & Adomako, E. (2023). Ant Colony Optimization-Enabled CNN Deep Learning Technique for Accurate Detection of Cervical Cancer. BioMed Research International, 2023.
Noguerol, T. M., Paulano-Godino, F., Martín-Valdivia, M. T., Menias, C. O., & Luna, A. (2019). Strengths, weaknesses, opportunities, and threats analysis of artificial intelligence and machine learning applications in radiology. Journal of the American College of Radiology, 16(9), 1239-1247.
Rawal, A., McCoy, J., Rawat, D. B., Sadler, B. M., & Amant, R. S. (2021). Recent Advances in Trustworthy Explainable Artificial Intelligence: Status, Challenges, and Perspectives. IEEE Transactions on Artificial Intelligence, 3(6), 852-866.
Slamnik-Kriještorac, N., Kremo, H., Ruffini, M., & Marquez-Barja, J. M. (2020). Sharing distributed and heterogeneous resources toward end-to-end 5G networks: A comprehensive survey and a taxonomy. IEEE Communications Surveys & Tutorials, 22(3), 1592-1628.
Trocin, C., Mikalef, P., Papamitsiou, Z., & Conboy, K. (2021). Responsible AI for digital health: a synthesis and a research agenda. Information Systems Frontiers, 1-19.