Abstract: With the accelerating discourse about the manifold social, ethical, and other human-centered impacts of text-to-text and text-to-image AI tools (or toys) such as ChatGPT and Stable Diffusion, a question arises: To what degree can the mechanisms of massive data processing (textual, visual, auditory) via artificial neural networks still be kept transparent and explainable ("xAI")? This concerns the core ambitions of academic and artistic research as forms of knowledge-driven inquiry. The concept of "deep" machine learning, with its "emergent" properties and irritating artefacts, seems to elude conventional tools of analysis. Is the choice of metaphysical terminology simply a semantic cultural delay behind the rapid pace of AI technologies, or does it signal a fundamental challenge and an epistemological irritation within the traditional mind/matter dichotomy? Media archaeology radically insists on opening the black box of AI/ML in a non-anthropocentric way, identifying the technológos of such processes from within the techno-mathematical mechanism itself. But where exactly? Signal or noise? Reason or stochastics?