In this talk, we propose a methodology for parameter estimation and variable selection in high-dimensional nonlinear mixed-effects models. To address the high-dimensional setting, we consider a regularized maximum likelihood approach based on a Lasso penalty. Model selection relies on the extended Bayesian Information Criterion (eBIC), evaluated along the Lasso regularization path after refitting the model by maximum likelihood on each selected support. In practice, the Lasso-penalized maximum likelihood estimator is computed using a weighted proximal stochastic gradient descent algorithm with an adaptive learning rate. This optimization strategy allows us to handle very general models, including those that do not belong to the curved exponential family. The performance of the proposed procedure is evaluated through an extensive simulation study under various configurations. Finally, we apply the method to a real dataset to address a genetic marker identification problem arising in genomic-assisted selection for plant breeding.
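To make the optimization step concrete, the following is a minimal sketch of a weighted proximal stochastic gradient descent iteration with an AdaGrad-style adaptive learning rate, assuming the weighted Lasso penalty enters through its proximal operator (coordinate-wise soft-thresholding). The NLME likelihood gradient of the actual method is replaced here by a hypothetical stand-in, a minibatch least-squares gradient on a sparse linear model; the function names, step-size rule, and all tuning values are illustrative assumptions, not the talk's implementation.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1: coordinate-wise soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_prox_sgd(grad_fn, beta0, lam, weights, n_iter=2000, eta0=0.5, eps=1e-8, seed=0):
    """Proximal SGD for a smooth loss plus a weighted Lasso penalty.

    grad_fn(beta, rng) returns a stochastic gradient of the smooth part;
    the per-coordinate step size is adapted AdaGrad-style.
    """
    rng = np.random.default_rng(seed)
    beta = beta0.copy()
    G = np.zeros_like(beta)               # accumulated squared gradients
    for _ in range(n_iter):
        g = grad_fn(beta, rng)            # stochastic gradient estimate
        G += g ** 2
        eta = eta0 / (np.sqrt(G) + eps)   # adaptive per-coordinate step
        # Gradient step, then prox of the weighted L1 penalty.
        beta = soft_threshold(beta - eta * g, eta * lam * weights)
    return beta

# Toy example: sparse linear model as a stand-in for the NLME likelihood.
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.normal(size=n)

def grad_fn(beta, rng):
    # Minibatch gradient of the least-squares loss.
    i = rng.integers(0, n, size=32)
    Xi, yi = X[i], y[i]
    return Xi.T @ (Xi @ beta - yi) / len(i)

beta_hat = weighted_prox_sgd(grad_fn, np.zeros(p), lam=0.05, weights=np.ones(p))
```

In the full procedure described above, `beta_hat` would be computed for each penalty level along the regularization path; the model is then refitted by unpenalized maximum likelihood on each selected support, and eBIC picks among the resulting supports.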