We propose a programming paradigm for robotics that has the potential to drastically simplify robot programming. Building on Sikuli, a GUI automation language, we abstract specific robotic perception and control capabilities into first-class objects embedded in a simple scripting language. Currently, robot programming requires a deep understanding of perception, control, and algorithms; knowledge of a specific robot's perception capabilities and kinematics; and a substantial amount of software engineering. Although learning by demonstration allows even relatively unskilled users to adapt a robot to their needs, that approach is intrinsically limited in the complexity the resulting programs can reach. This paper presents a proof of concept for migrating Sikuli from the virtual GUI workspace of computer software to the physical 3D workspace of robotics. It then presents an example use case that illustrates the power of this approach: a simple script that arranges a set of randomly placed blocks into a tower using a Baxter robot equipped with an Asus Xtion Pro depth camera.