Browsing by Author "Lobel, Hans"
Now showing 1 - 3 of 3
- An automatic methodology to measure drivers' behavior in public transport (2024). Catalan, Hernan; Lobel, Hans; Herrera, Juan Carlos.
  The way in which public transport buses are driven has an influence on users' perception of and satisfaction with the service. Bus drivers' behavior is usually assessed by surveying passengers and/or using the mystery-passenger method, which does not necessarily allow for an objective and continuous evaluation. In this work, we introduce a novel methodology to automatically classify drivers' behavior in a more consistent and objective manner, based on data from inertial measurement units and machine learning techniques. By substituting human evaluators with automatic data collection and classification algorithms, we reduce the subjectivity and cost of the current methodology while increasing the sample size. Our approach is based on three components: i) data capture using inertial measurement units (e.g. mobile devices), ii) carefully tuned classifiers that deal with sample imbalance problems, and iii) an interpretable scoring system. Results show that the collected data captures several types of undesirable maneuvers, providing rich information to the classification process. In terms of categorization performance, the evaluated classifiers, namely support vector machines, decision trees, and k-NN, deliver high and consistent accuracy after the tuning process, even in the presence of a highly imbalanced sample. Finally, the proposed driver behavior score shows high discriminative power, effectively characterizing differences between drivers and providing driver-tailored driving recommendations that can be generated at specific spots to improve passengers' experience. The resulting methodology can be cost-effectively deployed at a large scale with good performance. (A minimal classification sketch appears after this list.)
- Automatic document screening of medical literature using word and text embeddings in an active learning setting (Springer, 2020). Carvallo, Andres; Parra, Denis; Lobel, Hans; Soto, Alvaro.
  Document screening is a fundamental task within Evidence-based Medicine (EBM), a practice that provides scientific evidence to support medical decisions. Several approaches have tried to reduce physicians' workload of screening and labeling vast amounts of documents to answer clinical questions. Previous works tried to semi-automate document screening and reported promising results, but their evaluations were conducted on small datasets, which hinders generalization. Moreover, recent work in natural language processing has introduced neural language models, but their performance has not been compared in EBM. In this paper, we evaluate the impact of several document representations, such as TF-IDF, along with neural language models (BioBERT, BERT, Word2Vec, and GloVe), in an active learning setting for document screening in EBM. Our goal is to reduce the number of documents that physicians need to label to answer clinical questions. We evaluate these methods using both a small, challenging dataset (CLEF eHealth 2017) and a larger but easier-to-rank one (Epistemonikos). Our results indicate that word and text neural embeddings consistently outperform the traditional TF-IDF representation. Among the neural embeddings, BERT and BioBERT yielded the best results on the CLEF eHealth dataset; on the larger Epistemonikos dataset, Word2Vec and BERT were the most competitive, showing that BERT was the most consistent model across corpora. In terms of active learning, an uncertainty sampling strategy combined with logistic regression achieved the best overall performance, above the other methods under evaluation, and in fewer iterations. Finally, we compared our best models, trained using active learning, with other authors' methods from CLEF eHealth, showing better results in terms of work saved for physicians in the document screening task. (A minimal uncertainty-sampling sketch appears after this list.)
- Overcoming Catastrophic Forgetting Using Sparse Coding and Meta Learning (2021). Hurtado, Julio; Lobel, Hans; Soto, Alvaro.
  Continuous learning occurs naturally in human beings. However, deep learning methods suffer from a problem known as Catastrophic Forgetting (CF), in which a model's performance on previously learned tasks drops drastically when it is sequentially trained on new tasks. This situation, known as task interference, occurs when a network modifies relevant weight values as it learns a new task. In this work, we propose two main strategies to address task interference in convolutional neural networks. First, we use a sparse coding technique to adaptively allocate model capacity to different tasks, avoiding interference between them. Specifically, we use a strategy based on group sparse regularization to specialize groups of parameters to learn each task. Afterward, by adding binary masks, we can freeze these groups of parameters and use the rest of the network to learn new tasks. Second, we use a meta-learning technique to foster knowledge transfer among tasks, encouraging weight reuse instead of overwriting. Specifically, we use an optimization strategy based on episodic training to foster the learning of weights that are expected to be useful for solving future tasks. Together, these two strategies help us avoid interference by preserving compatibility with previous and future weight values. Using this approach, we achieve state-of-the-art results on popular benchmarks used to test techniques for avoiding CF. In particular, we conduct an ablation study to identify the contribution of each component of the proposed method, demonstrating its ability to avoid retroactive interference with previous tasks and to promote knowledge transfer to future tasks. (A minimal group-sparsity and masking sketch appears after this list.)
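To make the first item's setup concrete, here is a minimal, illustrative sketch (not the authors' tuned pipeline) of imbalance-aware maneuver classification with scikit-learn. The synthetic features, class sizes, and the choice of an RBF SVM with balanced class weights are assumptions for illustration; the paper's real features come from inertial measurement unit recordings.

```python
# Minimal sketch: classifying per-window maneuver features under class imbalance.
# Synthetic features stand in for IMU-derived statistics (accelerometer/gyro).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

n_smooth, n_harsh = 950, 50                       # highly imbalanced classes
X = np.vstack([
    rng.normal(0.0, 1.0, size=(n_smooth, 6)),     # "smooth driving" windows
    rng.normal(2.0, 1.5, size=(n_harsh, 6)),      # "harsh maneuver" windows
])
y = np.array([0] * n_smooth + [1] * n_harsh)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" reweights the minority class, one common way to
# handle the sample-imbalance problem mentioned in the abstract.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```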
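The second item's core loop, uncertainty sampling with a logistic-regression screener over document embeddings, can be sketched as follows. This is an assumption-laden illustration: the random embeddings, seed-set construction, batch size, and ten query rounds stand in for the paper's real corpora and protocol.

```python
# Minimal sketch of active learning with uncertainty sampling for screening.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for document embeddings (e.g. averaged Word2Vec or BERT
# vectors); the hidden labels play the role of the physician "oracle".
X = rng.normal(size=(2000, 64))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)   # few relevant docs

# Seed set with a handful of known relevant and irrelevant documents.
pos, neg = np.where(y_true == 1)[0], np.where(y_true == 0)[0]
labeled = list(rng.choice(pos, 5, replace=False)) + list(rng.choice(neg, 15, replace=False))
pool = [i for i in range(len(X)) if i not in set(labeled)]

for step in range(10):
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(X[labeled], y_true[labeled])
    # Uncertainty sampling: query the pool documents whose predicted
    # probability of relevance is closest to 0.5.
    proba = clf.predict_proba(X[pool])[:, 1]
    closest = np.argsort(np.abs(proba - 0.5))[:20]
    queried = [pool[i] for i in closest]
    labeled += queried                              # oracle labels the batch
    pool = [i for i in pool if i not in set(queried)]
    found = y_true[labeled].sum() / y_true.sum()
    print(f"step {step}: labeled={len(labeled)}, relevant found={found:.2f}")
```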
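For the third item, the sketch below illustrates the two mechanisms named in the abstract, a group-sparse penalty over convolutional filter groups and binary masks that freeze them, in PyTorch. It is not the authors' method: the toy regression tasks, the median-norm masking rule, and the gradient-hook freezing are simplifications chosen for brevity.

```python
# Minimal sketch: group-sparse regularization plus binary masks to protect
# filters specialized on a previous task.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

def group_sparse_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Group-lasso penalty: L2 norm of each output filter, summed.
    Encourages whole filters to shrink so only a subset specializes on the task."""
    return weight.flatten(start_dim=1).norm(dim=1).sum()

# --- Task 1: train with the group-sparse regularizer (toy regression) ---
x1, t1 = torch.randn(4, 3, 16, 16), torch.randn(4, 8, 16, 16)
opt = torch.optim.SGD(conv.parameters(), lr=0.1)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(conv(x1), t1)
    loss = loss + 1e-3 * group_sparse_penalty(conv.weight)
    loss.backward()
    opt.step()

# Binary mask: filters whose norm stayed large are treated as "owned" by task 1.
filter_norms = conv.weight.detach().flatten(start_dim=1).norm(dim=1)
task1_mask = (filter_norms > filter_norms.median()).float()   # 1 = frozen

# --- Task 2: zero the gradients of frozen filters so they are not overwritten
# (biases are left trainable here for brevity) ---
def freeze_masked_filters(grad: torch.Tensor) -> torch.Tensor:
    return grad * (1.0 - task1_mask).view(-1, 1, 1, 1)

conv.weight.register_hook(freeze_masked_filters)
x2, t2 = torch.randn(4, 3, 16, 16), torch.randn(4, 8, 16, 16)
for _ in range(50):
    opt.zero_grad()
    nn.functional.mse_loss(conv(x2), t2).backward()
    opt.step()   # only the unmasked filters move
```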