Journal of Clinical Medicine Research, ISSN 1918-3003 print, 1918-3011 online, Open Access
Article copyright, the authors; Journal compilation copyright, J Clin Med Res and Elmer Press Inc
Journal website https://www.jocmr.org |
Review
Volume 15, Number 8-9, September 2023, pages 391-398
Beyond Human Limits: Harnessing Artificial Intelligence to Optimize Immunosuppression in Kidney Transplantation
Table
Model (with references to the studies where used) | Description | Advantages | Disadvantages |
---|---|---|---|
AI: artificial intelligence. | | | |
Artificial neural network (ANN) [7, 8] | A computer model that mimics the structure and function of the human brain. ANNs are made up of interconnected nodes, called neurons, that process information in a way similar to biological neurons. ANNs can be used to solve a wide variety of problems, including classification, regression, and forecasting. | Can learn complex relationships between variables; handles large amounts of data; can make predictions without a priori knowledge of the problem domain | Computationally expensive to train; difficult to interpret; prone to overfitting |
Computerized dosing (BestDose Software) [4] | A software program that uses AI to calculate and recommend the optimal dose of medication for a patient. BestDose Software takes into account the patient's individual characteristics, such as age, weight, kidney function, and concomitant medications, to calculate a safe and effective dose. | Personalized dosing recommendations; reduces medication errors; considers multiple patient factors | Dependence on accurate input data; may not account for rare or unusual cases; initial setup and integration can be time-consuming |
Intelligent dosing system (IDS) [5] | A broader term for a system that uses AI to calculate and recommend medication doses. An IDS can include computerized dosing software as well as other systems that use AI to support decisions about patient care. | Offers a holistic approach to dosing decisions; can incorporate various AI models and data sources | Can be expensive to implement and maintain; may require specialized training to use; may not be suitable for all patients |
Regression tree (RT) [7] | A type of decision tree used to predict a continuous value, such as the price of a house or the number of customers who will visit a store on a given day. RTs work by splitting the data into subsets based on the values of the input variables, then predicting the output value for each subset. | Simple and interpretable; handles non-linear relationships; can be used for both regression and classification | Prone to overfitting with deep trees; less accurate than some complex models for certain tasks; limited modeling power for highly complex data |
Multivariate adaptive regression splines (MARS) [7] | A type of non-linear regression model that can capture complex relationships between variables. MARS works by combining a set of linear splines to create a more flexible model. | Flexibility in capturing complex relationships; automatic feature selection; effective for data with interactions | May require larger datasets for accurate modeling; complexity in model interpretation; sensitive to noisy data |
Boosted regression tree (BRT) [7] | An ensemble learning model that combines the predictions of multiple regression trees, training the trees sequentially so that each new tree corrects the errors of the previous ones. BRTs are often used for the same kinds of regression tasks as single regression trees, typically with higher accuracy. | Improved prediction accuracy; handles complex relationships and interactions; robust against overfitting | Computationally intensive and may require more time; sensitive to noisy data; requires careful tuning of hyperparameters |
Support vector regression (SVR) [7] | A regression algorithm that uses support vectors to find a hyperplane that best fits the data, tolerating errors within a specified margin. SVR is often applied to the same kinds of regression tasks as the tree-based models. | Effective for high-dimensional data; can handle non-linear relationships; robust against overfitting | Choice of kernel function affects performance; may be sensitive to outliers; can be computationally demanding for large datasets |
Random forest regression (RFR) [7] | An ensemble learning model that averages the predictions of many regression trees, each trained on a random bootstrap sample of the data and a random subset of the features. This averaging, rather than sequential boosting, makes the model robust to individual noisy trees. | High prediction accuracy; handles complex relationships and interactions; robust against overfitting | Lack of transparency and interpretability; computationally intensive for large forests; can become biased towards dominant features |
Lasso regression (LAR) [7] | A regression algorithm that uses L1 regularization to shrink the coefficients of the model, driving some of them exactly to zero. This helps to prevent overfitting and performs implicit feature selection. | Feature selection through coefficient shrinkage; helps prevent overfitting; simplicity and interpretability | May not perform well with highly correlated features; sensitive to the choice of regularization strength; limited for complex non-linear relationships |
Bayesian additive regression trees (BART) [7] | An ensemble learning model that combines the predictions of multiple regression trees. BARTs are similar to random forests, but they use a Bayesian approach to learning, which can lead to more accurate predictions, especially for small datasets, and yields uncertainty estimates alongside point predictions. | Improved prediction accuracy; incorporates uncertainty through Bayesian framework; suitable for small datasets | Computational complexity can be high; requires careful hyperparameter tuning; may be challenging to implement for large datasets |
Multilayer perceptron (MLP) [9] | A type of artificial neural network that consists of multiple layers of interconnected neurons. MLPs are often used for classification and regression tasks. | Suitable for complex, non-linear relationships; can handle large datasets; can learn intricate patterns | Prone to overfitting without proper regularization; requires a large amount of data for training; may be computationally demanding for deep networks |
Finite impulse response (FIR) [9] | A type of filter used to process signals. FIR filters are linear and time-invariant, and they have a finite number of taps. They are common in signal processing applications such as audio processing and image processing. | Linear and time-invariant characteristics; precise control over filter response; suitable for real-time processing | Limited ability to handle dynamic systems; may require a large number of coefficients for complex filters; not suitable for all signal processing tasks |
Elman network [9] | A type of recurrent neural network used to process sequential data. Elman networks have a context layer that stores the outputs of the previous hidden units, which allows the network to learn temporal dependencies in the data. They are often used in applications such as natural language processing and machine translation. | Effective for modeling sequential data; captures long-term dependencies; suitable for tasks with temporal patterns | Complex architecture and training; sensitive to the choice of hyperparameters; limited performance on some complex tasks |
Adaptive-network-based fuzzy inference system (ANFIS) [10] | A hybrid intelligent system that combines fuzzy logic and artificial neural networks. ANFIS can be used to model complex systems and make predictions, and is common in applications such as control systems and forecasting. | Combines the strengths of fuzzy logic and neural networks; effective for modeling complex and uncertain systems; provides interpretability through fuzzy rules | Requires expert knowledge for rule generation; complexity in rule optimization; performance highly dependent on the quality of rules |
XGBoost [12] | An ensemble learning model that combines the predictions of multiple decision trees via gradient boosting, with built-in regularization and an implementation optimized for speed. XGBoost can be used to solve a wide variety of machine learning problems and is often applied to classification and regression tasks. | High prediction accuracy; handles complex relationships and interactions; robust against overfitting; efficient training and prediction | Lack of transparency and interpretability; can be sensitive to noisy data; requires careful tuning of hyperparameters |
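The ensemble principle behind the tree-based models in the table (RT, BRT, RFR, XGBoost) can be illustrated with a minimal sketch. The code below is purely didactic: the data are synthetic and the covariates hypothetical, not drawn from any of the cited studies. It fits a single regression tree and a random forest to the same data and compares their test-set errors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression problem: 5 hypothetical covariates, a non-linear target.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single, fully grown tree tends to overfit ...
tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
# ... while averaging 200 trees over bootstrap samples reduces variance.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

mae_tree = mean_absolute_error(y_te, tree.predict(X_te))
mae_forest = mean_absolute_error(y_te, forest.predict(X_te))
```

On data such as this, the forest's mean absolute error is typically well below that of the single tree, which is the accuracy-for-interpretability trade-off noted in the table.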
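The lasso's "feature selection through coefficient shrinkage" listed in the table can likewise be shown in a few lines. In this sketch (synthetic data, illustrative regularization strength) only the first of six features is informative; L1 regularization drives the coefficients of the noise features exactly to zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

# One informative feature out of six; the rest are pure noise.
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 6))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=n)

# The L1 penalty (alpha) shrinks small coefficients to exactly zero.
model = Lasso(alpha=0.5).fit(X, y)
n_zeroed = int(np.sum(model.coef_ == 0.0))
```

The zeroed coefficients are the implicit feature selection; choosing `alpha` too large would also shrink the informative coefficient, which is the sensitivity to regularization strength flagged as a disadvantage.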
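The FIR filter row describes a linear, time-invariant filter with a finite number of taps. A minimal example, again with synthetic data, is a 5-tap moving-average filter applied to a noisy sinusoid: the output is simply the convolution of the input with the tap coefficients.

```python
import numpy as np

# A 5-tap moving-average FIR filter: all coefficients equal, summing to 1.
taps = np.ones(5) / 5.0

# Synthetic signal: a slow sinusoid corrupted by Gaussian noise.
t = np.arange(200)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean + np.random.default_rng(2).normal(scale=0.3, size=t.size)

# FIR filtering is convolution with the finite tap sequence.
smoothed = np.convolve(noisy, taps, mode="same")

err_noisy = float(np.mean((noisy - clean) ** 2))
err_smoothed = float(np.mean((smoothed - clean) ** 2))
```

Because the filter has only five fixed coefficients, its response is fully determined and easy to control, but, as the table notes, a static filter like this cannot adapt to a dynamic system.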