Receive a weekly summary and discussion of the top papers of the week by leading researchers in the field.

General

Technology-related knowledge, skills, and attitudes of pre- and in-service teachers: The current situation and emerging trends.

In Computers in human behavior; h5-index 125

This is the introductory article for the special issue "Technology-related knowledge, skills, and attitudes of pre- and in-service teachers". It (1) specifies the concept of technology-related knowledge, skills, and attitudes (KSA) of teachers, (2) presents how these KSA are currently assessed, and (3) outlines ways of fostering them among pre- and in-service teachers. The eight articles in the special issue are structured accordingly, and we demonstrate how they contribute to knowledge in these three areas. Moreover, we show how the afterword to the special issue widens the perspective on technology integration by taking into account systems and cultures of practice. Due to their quantitative empirical nature, the eight articles investigate technology at the current state of the art. However, the potential of artificial intelligence has not yet been fully exploited in education. We provide an outlook on potential developments and their implications for teachers' technology-related KSA. To this end, we introduce the concept of augmentation strategies.

Seufert Sabine, Guggemos Josef, Sailer Michael

2020-Sep-06

Artificial intelligence, Attitudes, Augmentation strategies, Knowledge, Professional development for teachers, Skills, TPACK, Technology

General

A deep learning approach to detect Covid-19 coronavirus with X-Ray images.

In Biocybernetics and biomedical engineering

Rapid and accurate detection of the COVID-19 coronavirus is essential for preventing and controlling the pandemic through timely quarantine and medical treatment in the absence of any vaccine. The daily increase in COVID-19 cases worldwide and the limited number of available detection kits make it difficult to identify the presence of the disease, so alternatives are needed. Among already existing, widely available and low-cost resources, X-ray is a frequently used imaging modality; at the same time, deep learning techniques have achieved state-of-the-art performance in computer-aided medical diagnosis. This work therefore proposes an alternative diagnostic tool that detects COVID-19 cases using available resources and advanced deep learning techniques. The proposed method is implemented in four phases: data augmentation, preprocessing, and stage-I and stage-II deep network model design. The study is performed with 1215 publicly available images and is further strengthened by data augmentation techniques, which improve the generalization of the model and prevent overfitting by increasing the overall dataset to 1832 images. A two-stage deep network is designed to differentiate COVID-19-induced pneumonia from healthy cases and from bacterial and other virus-induced pneumonia on chest X-ray images. Comprehensive evaluations demonstrate the effectiveness of the proposed method under both (i) training-validation-testing and (ii) 5-fold cross-validation procedures. A classification accuracy of 97.77%, recall of 97.14% and precision of 97.14% for COVID-19 detection shows the efficacy of the proposed method. Further, the deep network architecture achieves averaged accuracy/sensitivity/specificity/precision/F1-score of 98.93/98.93/98.66/96.39/98.15 with 5-fold cross-validation, a promising outcome for COVID-19 detection using X-ray images.
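The accuracy, recall and precision figures above follow the standard confusion-matrix definitions. As a minimal sketch, here they are computed from hypothetical counts (chosen only for illustration, not taken from the paper's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)          # also called sensitivity
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, specificity, f1

# Hypothetical counts for a COVID-vs-rest test split, illustration only
acc, rec, prec, spec, f1 = classification_metrics(tp=34, fp=1, tn=54, fn=1)
print(f"accuracy={acc:.4f} recall={rec:.4f} precision={prec:.4f}")
```

Note that recall and precision coincide here because the counts of false negatives and false positives are equal; in general the two metrics trade off against each other.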

Jain Govardhan, Mittal Deepti, Thakur Daksh, Mittal Madhup K

Computer-aided diagnosis, Coronavirus detection, Covid-19, Deep learning, Pneumonia, X-ray

Ophthalmology

Dual-input convolutional neural network for glaucoma diagnosis using spectral-domain optical coherence tomography.

In The British journal of ophthalmology

BACKGROUND/AIMS: To evaluate, with spectral-domain optical coherence tomography (SD-OCT), the glaucoma-diagnostic ability of a deep-learning classifier.

METHODS: A total of 777 Cirrus high-definition SD-OCT image sets of the retinal nerve fibre layer (RNFL) and ganglion cell-inner plexiform layer (GCIPL) of 315 normal subjects, 219 patients with early-stage primary open-angle glaucoma (POAG) and 243 patients with moderate-to-severe-stage POAG were aggregated. The image sets were divided into a training data set (252 normal, 174 early POAG and 195 moderate-to-severe POAG) and a test data set (63 normal, 45 early POAG and 48 moderate-to-severe POAG). A visual geometry group (VGG16)-based dual-input convolutional neural network (DICNN) was adopted for glaucoma diagnosis. Unlike other networks, the DICNN structure takes two images (both RNFL and GCIPL) as inputs. The glaucoma-diagnostic ability was computed according to both accuracy and area under the receiver operating characteristic curve (AUC).
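The distinguishing feature of the DICNN is that the two inputs (RNFL and GCIPL maps) pass through separate feature-extraction branches whose outputs are fused before classification. A framework-free toy sketch of that fusion pattern follows; the branch "extractor" here is a trivial stand-in (per-row means), not the VGG16 convolutional stack used in the paper, and all values are illustrative:

```python
def branch_features(image):
    """Stand-in feature extractor: per-row means of a 2D image (list of lists).
    In the DICNN this role is played by a VGG16-style convolutional branch."""
    return [sum(row) / len(row) for row in image]

def dicnn_score(rnfl_map, gcipl_map, weights, bias):
    """Fuse features from both inputs by concatenation, then apply a linear head."""
    fused = branch_features(rnfl_map) + branch_features(gcipl_map)
    return sum(w * x for w, x in zip(weights, fused)) + bias

# Tiny 2x2 'thickness maps' and hand-picked head weights, illustration only
rnfl = [[0.2, 0.4], [0.6, 0.8]]
gcipl = [[0.1, 0.3], [0.5, 0.7]]
score = dicnn_score(rnfl, gcipl, weights=[1.0, 1.0, 1.0, 1.0], bias=0.0)
```

The design point is that fusing both modalities lets the classifier weigh RNFL and GCIPL evidence jointly, which is why the paper compares the DICNN against single-input networks trained on each map separately.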

RESULTS: For the test data set, the DICNN could accurately distinguish between patients with glaucoma and normal subjects (accuracy=92.793%, AUC=0.957 (95% CI 0.943 to 0.966), sensitivity=0.896 (95% CI 0.896 to 0.917), specificity=0.952 (95% CI 0.921 to 0.952)). For distinguishing between patients with early-stage glaucoma and normal subjects, the DICNN's diagnostic ability (accuracy=85.185%, AUC=0.869 (95% CI 0.825 to 0.879), sensitivity=0.921 (95% CI 0.813 to 0.905), specificity=0.756 (95% CI 0.610 to 0.790)) was higher than that of convolutional neural network algorithms trained with RNFL or GCIPL separately.

CONCLUSION: The deep-learning algorithm using SD-OCT can distinguish normal subjects not only from patients with established glaucoma but also from patients with early-stage glaucoma. The DICNN model, trained on both RNFL and GCIPL thickness map data, showed a high diagnostic ability for discriminating patients with early-stage glaucoma from normal subjects.

Sun Sukkyu, Ha Ahnul, Kim Young Kook, Yoo Byeong Wook, Kim Hee Chan, Park Ki Ho

2020-Sep-12

Glaucoma, Imaging, Macula, Optic Nerve

General

New interpretable deep learning model to monitor real-time PM2.5 concentrations from satellite data.

In Environment international

Particulate matter with a diameter of less than 2.5 μm (PM2.5) is a key air quality parameter. Real-time knowledge of PM2.5 is highly valuable for lowering the risk of detrimental impacts on human health. To achieve this goal, we developed a new deep learning model, EntityDenseNet, to retrieve ground-level PM2.5 concentrations from Himawari-8, a geostationary satellite providing high-temporal-resolution data. In contrast to traditional machine learning models, the new model can automatically extract PM2.5 spatio-temporal characteristics. Validation across mainland China demonstrates that hourly, daily and monthly PM2.5 retrievals have root-mean-square errors of 26.85, 25.3, and 15.34 μg/m3, respectively. In addition to achieving higher accuracy than various machine learning inversion methods (backpropagation neural network, extreme gradient boosting, light gradient boosting machine, and random forest), EntityDenseNet can "peek inside the black box" to extract the spatio-temporal features of PM2.5. This model can show, for example, that PM2.5 levels in the coastal city of Tianjin were more influenced by air from Hebei than from Beijing. Further, even without meteorological information, EntityDenseNet can extract seasonal characteristics demonstrating that PM2.5 is more closely related within three-month groups over mainland China: (1) December, January and February, (2) March, April and May, (3) July, August and September. EntityDenseNet can obtain high-temporal-resolution satellite-based PM2.5 data over China in real time. This could act as an important tool to improve our understanding of PM2.5 spatio-temporal features.
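The name "EntityDenseNet" suggests entity embeddings: categorical inputs such as station or city identifiers are mapped to learned dense vectors that are concatenated with continuous predictors, and inspecting those vectors is one common way to "peek inside the black box" and recover relationships like the Tianjin-Hebei link above. A minimal framework-free sketch of the lookup-and-concatenate step (the embedding values and feature names here are placeholders, not learned quantities from the paper):

```python
# Placeholder embedding table: region -> dense vector (values illustrative only;
# in a trained model these would be learned jointly with the network weights)
region_embedding = {
    "Tianjin": [0.12, -0.40],
    "Beijing": [0.05, 0.33],
    "Hebei":   [0.10, -0.35],
}

def build_input(region, continuous_features):
    """Concatenate the region's embedding with continuous predictors
    (e.g. satellite radiances, time of day) to form the network input."""
    return region_embedding[region] + list(continuous_features)

def cosine(u, v):
    """Cosine similarity between two embedding vectors: reading spatial
    structure out of the model by comparing learned region vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

x = build_input("Tianjin", [0.8, 0.1, 0.5])  # 2 embedding dims + 3 features
```

In this toy table the Tianjin vector sits closer to Hebei than to Beijing, mirroring the kind of influence pattern the abstract describes; with real learned embeddings, such comparisons are what makes the model interpretable.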

Yan Xing, Zang Zhou, Luo Nana, Jiang Yize, Li Zhanqing

2020-Sep-10

Deep learning, Himawari-8, PM(2.5), Satellite

General

Who Gets Credit for AI-Generated Art?

In iScience

The recent sale of an artificial intelligence (AI)-generated portrait for $432,000 at Christie's art auction has raised questions about how credit and responsibility should be allocated to the individuals involved, and how the anthropomorphic perception of the AI system contributed to the artwork's success. Here, we identify natural heterogeneity in the extent to which different people perceive AI as anthropomorphic. We find that differences in the perception of AI anthropomorphicity are associated with different allocations of responsibility to the AI system and of credit to the various stakeholders involved in art production. We then show that perceptions of AI anthropomorphicity can be manipulated by changing the language used to talk about AI (as a tool versus an agent), with consequences for artists and AI practitioners. Our findings shed light on what is at stake when we anthropomorphize AI systems and offer an empirical lens through which to reason about how to allocate credit and responsibility to human stakeholders.

Epstein Ziv, Levine Sydney, Rand David G, Rahwan Iyad

2020-Aug-29

Artificial Intelligence, Computer Science, Economics

Radiology

Image-level detection of arterial occlusions in 4D-CTA of acute stroke patients using deep learning.

In Medical image analysis

The triage of acute stroke patients is increasingly dependent on four-dimensional CTA (4D-CTA) imaging. In this work, we present a convolutional neural network (CNN) for image-level detection of intracranial anterior circulation artery occlusions in 4D-CTA. The method uses a normalized 3D time-to-signal (TTS) representation of the input image, which is sensitive to differences in the global arrival times caused by the potential presence of vascular pathologies. The TTS map presents the time within the cranial cavity at which the signal reaches a percentage of the maximum signal intensity, corrected for the baseline intensity. The method was trained and validated on 214 patient images and tested on an independent set of 279 patient images. This test set included all consecutive suspected-stroke patients admitted to our hospital in 2018. The accuracy, sensitivity, and specificity were 92%, 95%, and 92%, respectively. The area under the receiver operating characteristic curve was 0.98 (95% CI: 0.95-0.99). These results show the feasibility of automated stroke triage in 4D-CTA.
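The TTS map described above can be stated precisely per voxel: take the first time point at which the baseline-corrected signal reaches a given fraction of its baseline-corrected maximum. A pure-Python sketch of that per-voxel computation follows; the threshold fraction and baseline handling are assumptions for illustration, and the paper's exact normalization may differ:

```python
def time_to_signal(curve, fraction=0.5):
    """First time index at which the baseline-corrected signal reaches
    `fraction` of its baseline-corrected maximum.

    curve: intensity over time for one voxel; curve[0] is taken as baseline.
    Returns None if the threshold is never reached (e.g. a flat curve).
    """
    baseline = curve[0]
    peak = max(curve) - baseline
    if peak <= 0:
        return None
    threshold = fraction * peak
    for t, value in enumerate(curve):
        if value - baseline >= threshold:
            return t
    return None

# A voxel whose contrast arrives late suggests delayed filling, as can be
# seen distal to an occlusion (illustrative intensity curves).
normal_voxel = [100, 180, 260, 300, 280]   # fast enhancement
delayed_voxel = [100, 105, 120, 160, 300]  # slow enhancement
```

Computing this index for every voxel in the cranial cavity yields the 3D TTS map, which a CNN can then screen for the globally delayed arrival patterns associated with occlusions.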

Meijs Midas, Meijer Frederick J A, Prokop Mathias, Ginneken Bram van, Manniesing Rashindra

2020-Sep-05

4D-CTA, Convolutional Neural Networks, Deep Learning, Stroke