[-]x0x7

I agree. I'm not usually the one to say erturkerjerrbs. But one problem with these models is that they are too focused on autonomy: they are less effective than a human + AI pair. Yet these tech companies really want to cut the human out, and they are trying to get people to buy the marketing.

Why this matters: these models are constantly tuned to improve at their most common tasks. The more autonomous a model is, the less effective it will be at working with you. Eventually you will just be a hindrance, not because the model is so effective, but because it has been tuned for something other than peak human + AI performance. Then it really will ittooookerjooorebs.

Why buy into a concept that's less effective, just so you can help a company train toward cutting you out?

I dream of a model trained with reinforcement learning where the reward is determined by another model (and humans) assessing how effective the human + AI pair was. Reward the model for whatever makes the combined system better, not for reducing human value. AI can either enhance human value or reduce it; I don't think it's a given in either direction. It depends on the choices we make at different forks in the road.
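To make the reward idea concrete, here is a minimal sketch in Python. It is not any existing training setup; names like judge_model, Episode, and pair_reward are hypothetical placeholders for the evaluator and data described above.

```python
# Sketch of a "pair reward": the assistant is rewarded only for how much it
# improves the joint human + AI outcome over the human working alone.
# judge_model is a stand-in for a separate evaluator model or human raters.

from dataclasses import dataclass


@dataclass
class Episode:
    task: str                # the problem the human + AI pair worked on
    human_solo_output: str   # what the human produced without assistance (baseline)
    joint_output: str        # what the human + AI pair produced together


def judge_model(task: str, output: str) -> float:
    """Hypothetical learned judge (or human panel) returning a quality score in [0, 1]."""
    raise NotImplementedError("stand-in for a separate evaluator, not a real API")


def pair_reward(ep: Episode) -> float:
    """Reward the assistant by the improvement of the pair over the human alone,
    so cutting the human out earns nothing extra."""
    baseline = judge_model(ep.task, ep.human_solo_output)
    joint = judge_model(ep.task, ep.joint_output)
    return joint - baseline  # positive only when the pair beats the human-alone baseline
```

The point of the subtraction is that a model which merely replaces the human gets no credit; it is only rewarded for making the net system better.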

Why autonomy for robots but no autonomy for humans? They try so hard to erode our autonomy, and now we find them hyping up autonomy for something else.