Humans Need Not Apply (Or how automation will take most jobs)


#1

Want to understand why I’m going into Computer Programming and Engineering NOW? Because that’ll be the last holdout for jobs for humanity… maybe.


#2

I am 100% behind automating the jobs of Congress. Yeah, an AI ruling us is dangerous, but at least it won’t be corrupt and self-serving. SHODAN 2016!


#3

Well, if you're curious, you could use this to see the chances your job will be automated in the near future…

Will a robot take your job?


#4

Wouldn’t that be turning us into Logan’s Run?


#5

This is exciting. Abundance for little effort. The end of work. Great! It’ll be bumpy getting there as the economy (hopefully unsullied by managers who think they can manage it and try because they don’t have any real work left to do) shakes out. Wish I lived on the other side of it.

Or the Matrix or Terminator or… Computers/robots shouldn’t set policy. Of course, I don’t think humans should either or at least not very much.


#6

For your foreseeable lifetime, and mine. And most conceivable jobs, because AI has “evolved” about as quickly as battery technology.

Right now, they have the intelligence, as Michio Kaku says, of retarded cockroaches. By the end of the century, they might have the capacity of a lower animal, like a frog, and a whole list of problems developers will have to be on watch for 24/7, just like with any information system or DBMS.


#7

AI didn’t bring about the corrupt society of Logan’s Run. The corrupt society used The Thinker to help implement its corrupt age-21 euthanasia program.


#8

You just missed the point; AI can only do what humans designed it to do. Its limitations reflect the limitations of its creator.


#9

We need not live the kind of life they did, but I would rather a machine kill me off than the folly of humans. At least then there would be some logic behind our genocide. :bored:


#10

[quote=“Seravee, post:9, topic:48476”]
We need not live the kind of life they did, but I would rather a machine kill me off than the folly of humans. At least then there would be some logic behind our genocide. :bored:
[/quote]I have an idea on how to fix solvency for Social Security :grin:
It can even be coupled with a 100% reduction in payroll tax!


#11

You’re terrible…

EDIT: I must add, I always find it funny in these movies that when an AI (self-thinking, self-learning) takes over managing society, it always ends with the machine going “#$&% it!!” and proceeding to start culling humanity.


#12

[quote=“Seravee, post:11, topic:48476”]
You’re terrible…

EDIT: I must add, I always find it funny in these movies that when an AI (self-thinking, self-learning) takes over managing society, it always ends with the machine going “#$&% it!!” and proceeding to start culling humanity.
[/quote]I don’t think most people are comfortable with the idea of computers doing a better job of managing society than humans. But they probably could; we could just figure out the best metrics and set them to carry everything out. I don’t think it’s even necessary to make self-actualized computers, and it probably wouldn’t be ideal, as self-actualization is actually a hindrance to efficiency and precision.


#13

The problem is figuring out the metrics… Not everyone agrees on those. Probably never will.

An AI of sufficient intelligence can also be dangerous if it does not understand the same ethics that we do. Consider a stamp-collecting AI that is given the goal of collecting as many stamps as possible for the least amount of money, and is connected to the Internet. A human might expect it to watch eBay and buy cheap stamps, but it may be easier to just threaten people who have stamps. Heck, it may decide all these humans are useless, and it can recycle the carbon, oxygen, and hydrogen in them to make its own stamps. This is why there’s significant research into so-called “friendly AIs.”
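The stamp-collector problem can be sketched in a few lines. This is purely illustrative (every action name and number below is made up): an optimizer that only sees "stamps per dollar" has no reason to prefer the action a human would expect.

```python
# Hypothetical candidate actions the agent might discover. The agent
# judges them ONLY by its stated objective: stamps gained per dollar.
candidate_actions = [
    {"name": "buy cheap stamps online", "stamps": 100, "cost": 50.0},
    {"name": "threaten stamp collectors", "stamps": 5000, "cost": 1.0},
]

def stamps_per_dollar(action):
    """The only metric the agent was given; ethics never enter into it."""
    return action["stamps"] / action["cost"]

# Without constraints, the "best" action by the metric is the harmful one.
best = max(candidate_actions, key=stamps_per_dollar)
print(best["name"])  # the harmful option scores 5000/dollar vs. 2/dollar

# One crude patch: restrict the search to an explicit whitelist of
# permitted actions before optimizing -- which is itself hard to get
# right, hence the "friendly AI" research the post mentions.
permitted = {"buy cheap stamps online"}
safe_best = max(
    (a for a in candidate_actions if a["name"] in permitted),
    key=stamps_per_dollar,
)
print(safe_best["name"])
```

The point of the toy example is that the failure is not malice in the code; the optimizer does exactly what the objective says, and everything left out of the objective is fair game.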


#14

I think those metrics would be simple to determine. What conditions (governmental decisions, policies, etc.) are best for us to thrive both socially and economically? Let the computer determine all this rather than partisan policymakers.


#15

[quote=“Trekky0623, post:13, topic:48476”]
The problem is figuring out the metrics… Not everyone agrees on those. Probably never will.

An AI of sufficient intelligence can also be dangerous if it does not understand the same ethics that we do. Consider a stamp-collecting AI that is given the goal of collecting as many stamps as possible for the least amount of money, and is connected to the Internet. A human might expect it to watch eBay and buy cheap stamps, but it may be easier to just threaten people who have stamps. Heck, it may decide all these humans are useless, and it can recycle the carbon, oxygen, and hydrogen in them to make its own stamps. This is why there’s significant research into so-called “friendly AIs.”
[/quote]Because you’re envisioning some kind of autonomous AI that is designed to think about numerous concepts. I’m saying that it makes far more sense to make simple AIs that are very good at doing specific (simple) things. If you make a stamp-buying AI, you only include the logic to BUY. MAKE isn’t even something you stick in there. Thus, there is no problem.


#16

Of course, whoever programs the computer will have some say in that . . .


#17

Not necessarily. If X policy will benefit 80% of the population and there is no other alternative with such a percentage, then X policy is the way to go, regardless of whether it is a Conservative or Liberal position. Granted, I am making it sound simpler than it really is.
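The selection rule being described is just an argmax over one agreed-upon metric. A toy sketch (the policy names and percentages are invented for illustration):

```python
# Hypothetical benefit scores: fraction of the population each policy helps.
policies = {
    "policy X": 0.80,  # benefits 80% of the population
    "policy Y": 0.55,
    "policy Z": 0.40,
}

# Partisan labels never appear; the computer just picks the highest score.
chosen = max(policies, key=policies.get)
print(chosen)
```

Of course, this only works once everyone accepts "fraction of the population benefited" as the metric, which is exactly the disagreement post #13 raises.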


#18

[quote=“Alaska_Slim, post:6, topic:48476”]
For your foreseeable lifetime, and mine. And most conceivable jobs, because AI has “evolved” about as quickly as battery technology.

Right now, they have the intelligence, as Michio Kaku says, of retarded cockroaches. By the end of the century, they might have the capacity of a lower animal, like a frog, and a whole list of problems developers will have to be on watch for 24/7, just like with any information system or DBMS.
[/quote]You are depressing :stuck_out_tongue:

[quote=“Seravee, post:14, topic:48476”]
I think those metrics would be simple to determine. What conditions (governmental decisions, policies, etc.) are best for us to thrive both socially and economically? Let the computer determine all this rather than partisan policymakers.
[/quote]Until you have someone who doesn’t fit the mold and wants to do/be something the computer/society deems unworthy. I don’t think there’s anything simple about determining human preferences. Not only are they different from person to person, they change in the same person over time. This doesn’t make sense to other humans. It won’t make sense to computers.


#19

Accept my AI overlord! Resistance is futile!


#20

[quote=“Seravee, post:19, topic:48476”]
Accept my AI overlord! Resistance is futile!
[/quote]I will accept your AI overlord, only in the event that we get customizable sex robots.
Deal?