In many fields of application, the amount of data collected is rapidly increasing. With "Big Data" available, deviations from simple standard models can usually be detected, and it becomes tempting to consider more complex models instead. Despite the increase in computational power, however, classical statistical methods such as maximum likelihood and Bayesian inference often reach their limits when applied to such complex statistical models. New simulation-based methods, such as approximate Bayesian computation and indirect inference, are often applied for parameter estimation in these situations. For classification purposes, machine learning methods provide popular tools. A trade-off often has to be found between exploiting the most relevant information in the data and computational feasibility. Both a clever algorithmic implementation and speed-improving concepts (such as importance sampling) can help to obtain results with reasonable computational effort.
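To illustrate the simulation-based idea mentioned above, the following is a minimal sketch of approximate Bayesian computation in its simplest (rejection-sampling) form. The toy model, the function name `abc_rejection`, and all parameter choices are assumptions for illustration only: parameters are drawn from the prior, data are simulated from the model, and a draw is kept whenever a summary statistic of the simulated data lies close enough to that of the observed data, so that no likelihood evaluation is needed.

```python
# Illustrative sketch of ABC rejection sampling (toy example, not from the text).
import random
import statistics

def abc_rejection(observed, n_draws=20000, eps=0.05, seed=0):
    """Approximate the posterior of a normal mean mu (known sd = 1),
    with a uniform prior on [-5, 5] and the sample mean as summary."""
    rng = random.Random(seed)
    obs_summary = statistics.fmean(observed)
    n = len(observed)
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-5.0, 5.0)                    # draw from the prior
        sim = [rng.gauss(mu, 1.0) for _ in range(n)]   # simulate data under mu
        if abs(statistics.fmean(sim) - obs_summary) < eps:
            accepted.append(mu)                        # keep matching draws
    return accepted

if __name__ == "__main__":
    rng = random.Random(1)
    data = [rng.gauss(2.0, 1.0) for _ in range(100)]   # observed data, true mu = 2
    posterior_sample = abc_rejection(data)
    print(len(posterior_sample), statistics.fmean(posterior_sample))
```

The acceptance threshold `eps` governs the trade-off noted above: a smaller `eps` extracts more information from the data but lowers the acceptance rate and thus raises the computational cost, which is why refinements such as importance sampling are attractive in practice.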