raunaqmb 5 hours ago

Hello! Wanted to share our new work on AnySkin, a new tactile sensor. Most recent developments in robotics continue to ignore touch, but AnySkin has the potential to change that.

Our most exciting result: learned visuotactile policies for precise tasks like USB insertion and credit card swiping that work out-of-the-box when you replace skins! To the best of our knowledge, this has never been shown before with any existing tactile sensor.

Why is this important? For the first time, you can collect data and train models on one sensor and expect them to generalize to new copies of the sensor -- opening the door to the kind of large foundation models that have revolutionized vision and language reasoning.

Would love to hear the community's questions, thoughts and comments!