Article title (Persian):
Artificial Intelligence in Medicine: Where Are We Now?
Article title (English):
Artificial Intelligence in Medicine: Where Are We Now?
Article keywords (Persian):
Artificial intelligence, deep learning, radiology, pathology, ophthalmology, dermatology, review
Keywords (English):
Artificial intelligence – Deep learning – Radiology – Pathology – Ophthalmology – Dermatology – Review.
Applications in Medical Imaging
Plain Film Radiography
Applications in Pathology
Applications in Ophthalmology
Applications in Dermatology
Excerpt from the English article:
Artificial intelligence (AI) is poised to transform medical practice. AI has been studied in several areas of healthcare and medical practice, including precision medicine, population health, and natural language processing (1). The application of AI to visual tasks, known as computer vision, has generated significant interest within the medical community. As such, AI is believed to be relevant to visually-orientated specialties such as radiology, pathology, ophthalmology, and dermatology. The fuel behind AI’s development is the availability of large digital datasets; deep learning algorithms use these datasets to train themselves to perform a specific task, such as identifying a lesion in an image. In this article, we review key medical AI studies in the visually-orientated fields with the aim of illuminating the future landscape of AI-enhanced healthcare.

APPLICATIONS IN MEDICAL IMAGING

Plain Film Radiography

The chest radiograph is the most common imaging examination worldwide, with 2 billion performed per year (2). The popularity of chest radiography is explained by its widespread availability around the world and its utility in the diagnosis of a range of conditions. Furthermore, the availability of labeled images, the currency of AI research, is greatest with chest radiographs. For these reasons, chest radiography has garnered the greatest interest amongst AI researchers and continues to be an active research area. It is fitting to begin the discussion with the data that underpins AI. Among the largest medical AI datasets to date is ChestX-ray14. The dataset was released by Wang et al (3) of the National Institutes of Health and consists of 112,120 radiographs from 30,805 unique patients. The images were labeled with 14 conditions, such as emphysema, pulmonary nodules, and pneumonia, by four radiologists (three generalists and one thoracic subspecialist). The ground truth was established by a majority vote of the four radiologists.
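The majority-vote procedure described above can be sketched in a few lines. This is a minimal illustration, not the NIH pipeline: the function name and example labels are assumptions for demonstration only.

```python
from collections import Counter

def majority_vote(annotations):
    """Return the label agreed on by an outright majority of annotators,
    or None when no label wins more than half the votes (a tie or
    three-way split would need adjudication in practice)."""
    counts = Counter(annotations)
    label, n = counts.most_common(1)[0]
    return label if n > len(annotations) / 2 else None

# Three generalists and one thoracic subspecialist annotate one image:
votes = ["pneumonia", "pneumonia", "atelectasis", "pneumonia"]
print(majority_vote(votes))  # pneumonia
```

A real labeling effort would also have to decide how to handle split votes; returning None here simply flags such cases for review.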
The dataset, originally known as ChestX-ray8, was publicly released in 2017 with the goal of addressing the dearth of labeled data in medical AI research. The dataset is available to use for free, and is still amongst the largest publicly available datasets in the world. However, ChestX-ray14 has its weaknesses as a dataset, which have been well described in online resources (4,5). For example, diagnostic uncertainty permeates the dataset; practicing radiologists will recognize that there is a level of uncertainty with many radiological diagnoses, and this is evident from the ChestX-ray14 dataset. Wang et al (3) obtained the ground truth by text-mining the radiology reports. Frequently, these reports contained multiple possible diagnoses, likely because the true diagnosis was radiologically uncertain. Additionally, many of the labels overlap with each other radiologically; for instance, pneumonia can have a similar appearance to atelectasis. Furthermore, there is no definitive evidence affirming whether the radiological diagnosis was correct. There are further weaknesses with ChestX-ray14, particularly relating to the establishment of the ground truth. Whilst a more detailed discussion of these weaknesses is outside the scope of this article, we refer the reader to the following resources for further information on this topic (4,5).
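The pitfall of text-mining described above can be made concrete with a toy keyword labeler. This is only in the spirit of mining free-text reports; the finding vocabulary, hedge terms, and function name are illustrative assumptions, not the method of Wang et al (3).

```python
import re

# Hypothetical finding vocabulary and hedging phrases for illustration.
FINDINGS = {"pneumonia", "atelectasis", "emphysema", "nodule"}
HEDGES = re.compile(r"\b(possible|probable|may represent|cannot exclude|versus)\b",
                    re.IGNORECASE)

def label_report(report):
    """Return (labels, uncertain): finding keywords present in the report
    text, and whether the report hedges its diagnosis."""
    text = report.lower()
    labels = sorted(f for f in FINDINGS if f in text)
    return labels, bool(HEDGES.search(report))

labels, uncertain = label_report(
    "Right basal opacity, possible pneumonia versus atelectasis."
)
print(labels, uncertain)  # ['atelectasis', 'pneumonia'] True
```

Note how a single hedged report yields two overlapping labels, with no way to tell from the text alone which diagnosis was correct; this is exactly the diagnostic uncertainty the paragraph above describes.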