Bizarre story about a Silicon Valley hardware startup

"I'd later learn from some hardware experts that you really want to price goods like mine around 5x the cost to manufacture."

You can Google information like that in about fifteen seconds; it doesn't require hunting down an expert.
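For reference, the back-of-the-envelope version of that rule of thumb looks like this (every number below is invented):

  # Made-up example of the "price at ~5x manufacturing cost" rule of thumb.
  bom_and_assembly = 30.00              # hypothetical cost to build one unit
  retail_price = 5 * bom_and_assembly   # -> $150
  # The 4x gap isn't profit: it has to cover distributor and retail margins,
  # returns, support, marketing, NRE, and the units that never sell.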

"I lost $100,000 over five years on my first halfway successful venture. But as far as I'm concerned, it's been a success."

The author did everything wrong and seems proud of it

Reply to
bitrex

He didn't lose $100K. He lost $100K provided by his relatives, backers, and investors.

How many entrepreneurs got it right the first time? A small scale Kickstarter failure is probably a good educational exercise in preparation for the next startup. However, he seems to have given up and is now working for Adecco (a Google X company):

"My life goal is to use automation to reduce the human cost of living to zero."

A lofty and difficult goal. As automation eliminates jobs at the low end of the wage scale, methinks the displaced workers might not welcome additional automation to help them survive on zero income and social welfare handouts. I suspect that he hasn't thought through the implications of what might happen should he actually achieve his life goal.

Drivel: He mentioned living in Ben Lomond, CA, which is also where I've been living for the past 45(?) years. We may have met at some time in the past as the name seems vaguely familiar.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

"By this time I was drinking pretty heavily every night and smoking a lot of weed."

Not that I've lived a perfectly "clean" life myself, but...ah...y'know.

I see a lot of concern about robotics and AI; IMO the hand-wringing ignores the fact that robots more or less can't do shit except some rather mundane types of repetitive manual labor: part assembly, 3D printing, automatic vacuum cleaners that don't even vacuum all that well.

They still can't make a robot that picks fruit successfully, last I looked. I saw an automatic pizza-making machine at a trade show once; if the look of the finished product was anything to judge by, I wouldn't be lining up to put money into it.

AI won't be thinking any deep thoughts anytime soon; robotics and AI can't currently even emulate the behavior of a dung beetle successfully. Or a paramecium, for that matter.

In any case I'm not opposed to the idea of a guaranteed minimum income intrinsically. I believe most humans are industrious by nature, desire a purpose and to be of use in life, and will find something productive to do; and it's far easier to pursue intellectual work of more value to society than manual labor when you aren't spending 50 hours a week at a menial job.

There will be people who exploit it, there always are, but they will be a minority, and that's simply an amortized cost that saves money in the long run. It seems to be entrepreneurs who are more into abusing drink and drugs, by the look of things; random drug tests on welfare/disability recipients, in the places that has been tried, tend to come back no worse than the population average and often better.

"My girlfriend, who had started the company with me, had long since stopped being my business partner." Right, don't hire 'family' that's a pretty good business lesson to learn, letting business spill over into your personal life is never a good idea (unless you were born super-wealthy already like some political figures we know.)

"I couldn't afford to pay for her living" WTF!?

Reply to
bitrex

Not that there's any money in paramecia.

Of course they're only going to develop AI where the ROI is greater than unity.

If AI tools become self-introspective enough that the incremental cost of developing a given AI system becomes zero or negative, then you will see emulated paramecia, and everything else, including the total displacement of humanity.

It seems unlikely that that will happen, though. The cost of electricity is necessarily nonzero; even just the cost of developing the hardware and software required to get there is nonzero, and must be amortized over quite a long scale, both in time and across all the businesses and markets that might stand to benefit from such solutions.

It's a long-standing problem in production automation that material cost is nonzero, and necessarily so.
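As a back-of-the-envelope illustration of that "ROI greater than unity" condition (every figure below is invented, just to show the shape of the calculation):

  # Hypothetical ROI check for an automation project; all numbers are made up.
  def automation_roi(dev_cost, hw_cost, yearly_energy, yearly_materials,
                     yearly_labor_saved, years, markets_sharing_dev=1):
      """ROI = value recovered / total cost over the amortization window."""
      amortized_dev = dev_cost / markets_sharing_dev   # dev cost spread across markets
      total_cost = amortized_dev + hw_cost + years * (yearly_energy + yearly_materials)
      return (years * yearly_labor_saved) / total_cost

  # $2M development shared across 4 product lines, $500k of hardware,
  # $50k/yr energy, $150k/yr materials, replacing $400k/yr of labor, over 10 years:
  print(automation_roi(2_000_000, 500_000, 50_000, 150_000, 400_000, 10, 4))
  # ~1.33, so worth doing; cut the labor saved to $250k/yr and it drops below unity.

Until the denominator stops being dominated by those nonzero energy, hardware and material terms, the emulated paramecium stays unfunded.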

There is no foreseeable future in which we have Star Trek replicators; even if we had unlimited-with-an-asterisk fusion energy as they supposedly do, the energy-to-matter transformation is still off by three or more orders of magnitude. Obviously then, you wouldn't make a replicator or transporter driven by pure energy, but with a reservoir of all the atoms required plus a very fine-grained 3D printer; but again, the balance is very different and it would never turn out quite the same.
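For scale, here's the straight E = mc^2 arithmetic behind that (my numbers, nothing subtle):

  # Energy needed to conjure 1 kg of matter out of pure energy, E = m * c^2.
  c = 3.0e8            # speed of light, m/s
  E = 1.0 * c**2       # joules for 1 kg
  print(E / 3.6e15)    # ~25 TWh per kilogram, roughly three years of output from a 1 GW plant

And that's the ideal, lossless case; any real conversion would be far worse.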

The most ironic part about automation is this: if it is desirable to have ever-smarter AI, then sooner or later, "it" will "am". (Er, "it thinks therefore it am"?) Business owners would wish that their workers could be infinitely compliant, but the fact will remain that middle managers and skilled workers won't go away (in one form or another), and those are positions or skills that require some degree of agency and self-awareness that will demand suitable compensation.

Hmm, is that a justifiable claim? That automation above a certain skill threshold is necessarily sentient, to some arguable degree? Interesting. Well, with so little background on what sentience really means, I suppose that's unfalsifiable... :)

Surely, if nothing else, automating business itself is necessarily something we would consider "sentient" -- at least in the important behaviors related to the operation of a business, if not necessarily in the particulars of implementation, or of fine language skills. It might not pass the conventional Turing test, but no one would notice or care if it's guiding big business at least as well as its human competitors.

Not that human ego would allow a machine to run something so "important" without their final say in things; up until the day they're utterly replaced and disposed of.

But, again, assuming things get to that level. (I would guess we have another century before then. Nothing anyone /here/ has to worry about.)

Minimum income seems to be a good idea, at least in the long term. If the economy continues to grow -- with or without humans -- the best we can hope for is to be supported gratefully by "our machine overlords". Life support (literally, and given the current definition of "life") is, or should be, a fixed per capita cost in terms of raw materials and energy. The [inflation-adjusted] monetary cost will likely decrease as automation improves; meanwhile, a growing economy means humanity becomes a smaller and smaller tax on the system, until it utterly surpasses the output of our heyday and we sit around relaxing on this rock (until such time as we are deemed inconvenient, or whatever).

The hard part is transitioning from here to there. Right now, life support is incredibly expensive (a sizable fraction of world GDP), and rolling out a useful UBI across the world is basically impossible. The best we can hope for is to get as many people as possible as productive as possible. That drives the economy up, driving quality of life up, and therefore driving the birth rate down*.

After that, the trick is introducing automation without driving more people into poverty: quality of life sinking even while the economy continues to grow. That's probably the stage at which to introduce a UBI.

*Birth rate correlates strongly with infant mortality rate and quality of life -- improve those, and the birth rate actually drops below replacement; that's why overpopulation happens in the worst-off places, and sub-replacement rates are showing up in some high-GDP countries today.
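To put toy numbers on the "smaller and smaller tax on the system" argument (the growth rates and costs below are placeholders, not data):

  # Toy model: share of world GDP needed for per-capita "life support"
  # under assumed growth rates. Every parameter is an invented placeholder.
  def support_share(gdp0, pop0, cost0, gdp_growth, pop_growth, cost_decline, years):
      gdp = gdp0 * (1 + gdp_growth) ** years
      pop = pop0 * (1 + pop_growth) ** years
      cost = cost0 * (1 - cost_decline) ** years    # automation cheapens support
      return pop * cost / gdp

  # $100T world GDP, 8B people, $5k/person/yr support, 3%/yr GDP growth,
  # 0.5%/yr population growth, 1%/yr decline in support cost:
  for y in (0, 25, 50, 100):
      print(y, round(support_share(100e12, 8e9, 5000, 0.03, 0.005, 0.01, y), 3))
  # 0.40 of GDP today, ~0.17 at 25 years, ~0.07 at 50, ~0.01 at 100.

The whole argument hinges on the GDP growth term outrunning the population term, which is where the birth-rate footnote comes in.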

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Design 
Website: https://www.seventransistorlabs.com/
Reply to
Tim Williams

I've not noticed humans to be logical or money oriented to that extent. Consider what was spent getting to the moon, or what kids design as hobby projects, or what people do just because it's 'cool' and they want to impress someone.

Well, no such cool analysis benefited the space missions so far.

I don't think we're very far from a 3D printer that can take our garbage and print, within many limits, what we want from it. The steps from today's machines are (a rough code sketch follows below):
- sort
- wash (those two steps could be done manually at first)
- store & inventory
- retrieve
- crush into tiny pieces (suitable shredder technology is well established)
- a larger product design library to choose from (that will occur without intervention)
- software that can make somewhat intelligent suggestions re which designs to try when you want something.
Not too hard.
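To make that concrete, here's a rough stub of the pipeline; the function names and data layout are mine, purely for illustration:

  # Hypothetical garbage-to-3D-print pipeline following the steps above; all stubs.
  def sort_waste(items):                # could be done manually at first
      return [i for i in items if i["material"] in ("PLA", "PET", "ABS")]

  def wash(items):                      # could also be manual at first
      return items

  def store_and_inventory(items, inventory):
      for i in items:
          inventory[i["material"]] = inventory.get(i["material"], 0) + i["mass_g"]
      return inventory

  def retrieve_and_shred(inventory, material, mass_g):   # shredders are a solved problem
      if inventory.get(material, 0) < mass_g:
          raise ValueError("not enough feedstock")
      inventory[material] -= mass_g
      return {"material": material, "mass_g": mass_g}

  def suggest_designs(library, want, inventory):
      # the "somewhat intelligent suggestions": match a keyword, check feedstock on hand
      return [d for d in library
              if want.lower() in d["name"].lower()
              and inventory.get(d["material"], 0) >= d["mass_g"]]

  # Example with made-up data:
  inv = store_and_inventory(wash(sort_waste(
      [{"material": "PET", "mass_g": 400}, {"material": "glass", "mass_g": 900}])), {})
  designs = [{"name": "phone stand", "material": "PET", "mass_g": 120}]
  print(suggest_designs(designs, "stand", inv))   # the phone stand qualifies

None of the individual steps is exotic, which is the point.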

In the early days, computer-driven business won't be trusted by the owners. But in time it will be, if it proves its value. There are already technology products on the market that people use & trust without understanding, and which can get them into plenty of trouble. Cars, smartphones, computers etc.

We will. People aren't going to spend years learning business skills if they don't think they need them. Wariness will disappear as the wary & skilled die out.

I can't see any reason why. There's plenty of reason why not.

It's happening. Some countries have a UBI, imperfectly implemented. As wealth improves that will spread. It's cheaper for the rich to pay a UBI than to have people get what they need to survive by crime, which is highly inefficient financially.

Business owners either automated to compete or got wiped out. Places that don't prioritise automation, such as India, remain poor.

NT

Reply to
tabbypurr

snipped-for-privacy@gmail.com wrote in news: snipped-for-privacy@googlegroups.com:

Like you when you post here. You fail to reach that goal, btw.

Reply to
DecadentLinuxUserNumeroUno

mustn't feed the trolls

Reply to
tabbypurr

Like the Kennedy administration.

Reply to
Tom Del Rosso

The Communists apparently felt the same way; well, one of them at least.

Reply to
bitrex

Of course they felt he was incompetent. That's why they put the missiles in Cuba and built the Berlin Wall.

It's amazing how much praise he gets. And what form it takes sometimes. There was a PBS documentary a few years ago in which a supporter said, essentially, "In his first year he screwed up but learned from his mistakes, then in his second year he screwed up but learned from his mistakes, and in his third year he screwed up but learned from his mistakes." He really intended to praise him, but he wasn't listening to the words coming out of his own mouth. It was a naked demonstration of the ability to ignore reality.

Reply to
Tom Del Rosso

Only Tom Del Rosso would be silly enough not to recognise that all humans screw up to some extent. If Kennedy hadn't done anything, he wouldn't have screwed up.

What he did wasn't perfect, but he did learn from those mistakes that he did make.

The contrast with Trump is that Trump screws up all the time and never seems to learn from any of his mistakes - to take an extreme example.

--
Bill Sloman, Sydney
Reply to
bill.sloman

Yeah, in that light he sounds a bit like another president I'm familiar with, but JFK wasn't really the proper "type" to get up and scream "HAIL OUR LEADER!" and throw the sieg heil at a big poster of himself, so naturally the Right was not particularly enamored of him.

Reply to
bitrex

For what it's worth, McCain, Romney, and Cruz weren't that type either; that's part of why they lost.

Reply to
bitrex

The Moon was about giving the Soviets the finger.

International posturing is extremely expensive. Many people are involved in making those decisions (to wit: all of Congress for starters). You can dupe some of them all the time, or many of them some of the time, but not all of them all the time. Not through multiple elections, not for a decades-long space program.

Not that posturing (strategy, that is) is necessarily a net benefit to any player. Such games tend to be negative-sum: the more posturing is displayed, the more money is spent. The strongest case is outright war, where your money is spent destroying your opponent's money (or vice versa, if you lose). "Money" in this case is meant broadly: political, economic and military power are ultimately all equivalent, at least from the highest-level view.

The high level view is what matters. What "cool" kids do doesn't matter, because they have no economic output.

Or, when they collectively do, you get perverse things; like, well, look at any of the kids' toys out this year and shake your head...

It's fortunate that the global trend has been towards peace -- the less war we conduct, the less economic output we waste, and the better we can cooperate with each other to produce cool technology.

Ultimately, if nothing else, people, corporations and countries live and die by the dollar. If they run out of money from enough bad decisions, evolution kicks in, they die a cold death and whatever resources remain get vacuumed up by who/whatever can get them. It's not at all impossible, or even all that unlikely, for a country to make very expensive bad decisions (even when the consequences of those decisions are easy to predict), but the overall trend tends to be self-preservation.

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Design 
Website: https://www.seventransistorlabs.com/
Reply to
Tim Williams

"Tim Williams" wrote in news:pupn7u$m6c$1 @dont-email.me:

With number two being the war on cannabis.

Reply to
DLUNU
