Due to the *digital transformation*, the amount of data
collected is rapidly increasing in many fields of application. With "Big
Data" available, deviations from simple standard models can usually be
detected, and it becomes tempting to consider more complex models
instead. Despite the increase in computational power, classical
statistical methods such as maximum likelihood and Bayesian inference,
as well as modern simulation-based methods (e.g., approximate maximum
likelihood, approximate Bayesian computation, indirect inference), often
reach their limits when applied to such complex statistical models. A
trade-off then has to be found between exploiting as much of the
relevant information in the data as possible and keeping the
computations feasible. Clever algorithmic implementations and
speed-improving concepts (such as importance sampling) can also help to
obtain results with reasonable computational effort.
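
As a brief, hedged illustration of the kind of speed improvement importance sampling can offer, the following Python sketch estimates a small tail probability under a standard normal target by drawing from a proposal shifted into the tail. The densities, the proposal location, and the event of interest are illustrative assumptions, not taken from any particular model discussed here.

```python
# Minimal importance-sampling sketch (illustrative assumptions only):
# estimate E_p[h(X)] = P(X > 3) under a standard normal target p by
# sampling from a proposal q shifted into the tail, so informative
# samples occur far more often than under p itself.
from math import erfc, sqrt

import numpy as np

rng = np.random.default_rng(0)

def p_pdf(x):
    # Target density: standard normal.
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def h(x):
    # Quantity of interest: indicator of the rare tail event {X > 3}.
    return (x > 3.0).astype(float)

# Proposal q: normal centred at 3.5, i.e. in the region that matters.
n = 10_000
x = rng.normal(loc=3.5, scale=1.0, size=n)
q_pdf = np.exp(-0.5 * (x - 3.5)**2) / np.sqrt(2 * np.pi)

weights = p_pdf(x) / q_pdf          # importance weights p(x)/q(x)
estimate = np.mean(weights * h(x))  # unbiased estimate of E_p[h(X)]

exact = 0.5 * erfc(3 / sqrt(2))     # exact P(X > 3) for comparison
print(f"importance-sampling estimate: {estimate:.5f}")
print(f"exact tail probability:       {exact:.5f}")
```

With the same number of draws, a plain Monte Carlo estimate from the target would rarely see the event at all, whereas the weighted estimator above already attains a usable accuracy; this is the sense in which such concepts reduce the computational effort.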