This talk is built on our vision of human-machine data exploration, which integrates interactive machine learning and interactive visualization to learn about and from large unstructured data in a joint fashion. The goal is to combine the strengths of machine learning and human analytical skills into a unified process that helps users "detect the expected and discover the unexpected" in large domain-specific data sets. In this talk, I will focus on scalable visual model inspection, which helps users analyze how the machine's understanding of a large unstructured data set aligns with their own. I will present a new multi-scale method for the visual inspection of models built on data sets with hundreds of thousands of data instances and thousands of labels. Furthermore, I will present the results of a study analyzing how data, model, and visualization characteristics influence users' model inspection performance. Finally, I will conclude with a discussion of the challenges of visual model inspection, its evaluation, and its role in human-machine data exploration.