Google Offers a Million Bucks For a Better Inverter

The only application I can think of for such huge motors is a pumped-storage system, in which water is pumped from a lower lake to a higher lake at night, when electricity is cheap, and allowed to flow back down during the day through turbo-generators.

Starting such a beast, even without load, is a big challenge even with a soft starter (huge multi-tap autotransformers); you simply can't slam the motor directly onto the grid :-).

If there is unexpected additional electric demand during the night, you would want to reduce the pumping power momentarily, but this is not possible with direct-connected motors.

With a VFD, you can handle the soft start and also control the pumping power according to night-time electricity spot prices.
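A rough sketch of the kind of spot-price-driven control a VFD makes possible. All numbers here (price thresholds, power rating) are made-up illustrative assumptions, not figures from any real plant:

```python
# Hedged sketch: pick a pump power setpoint for each hour from the
# electricity spot price.  A direct-connected motor can only run at
# 0% or 100%; a VFD allows any setpoint in between.

def pump_setpoint_mw(spot_price_eur_mwh, max_power_mw=440.0,
                     cheap_eur_mwh=20.0, expensive_eur_mwh=50.0):
    """Linearly ramp pumping power from full (cheap electricity)
    down to zero (expensive electricity)."""
    if spot_price_eur_mwh <= cheap_eur_mwh:
        return max_power_mw
    if spot_price_eur_mwh >= expensive_eur_mwh:
        return 0.0
    span = expensive_eur_mwh - cheap_eur_mwh
    fraction = (expensive_eur_mwh - spot_price_eur_mwh) / span
    return max_power_mw * fraction

# Example overnight hourly prices in EUR/MWh
prices = [15, 18, 25, 35, 45, 55]
schedule = [pump_setpoint_mw(p) for p in prices]
print(schedule)  # full power when cheap, tapering off as price rises
```

The linear ramp is just one possible policy; a real plant would also respect minimum-flow and ramp-rate limits of the machinery.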

Reply to
upsidedown

The point is that I want this off-the-shelf - not as a home made contraption. When I am wearing my "electronics designer" hat, I could put together such systems (usually by asking our technician to do the work...). But when I am wearing my "IT manager" hat, such hacks are out of the question.

Reply to
David Brown

Cruachan is a pump storage system rated at 440 MW. I don't know how fast the motors can start in pump mode, but in generation mode they can reach full rating in 30 seconds if they are in active standby mode (with compressed air ready to speed up the turbines), or 2 minutes from a cold start.

In pump mode, I don't expect that they run at anything like 440 MW, and start time and efficiency is not going to be as important, but it is certainly possible without VFD's - the station came online in 1965.

Reply to
David Brown

They ought to be! Hawaii is already hitting problems with fluctuating solar power taking the grid into overload and trip states:

formatting link

Without pumped storage or electrolytic metal refining as ballast you end up having to disconnect generating capacity to protect the grid.

That is the theory, but there are plenty of places where the mains infrastructure is inadequate to move power from where it is generated to where it is needed. In principle solar PV should be a win-win at low latitudes, since aircon requirements roughly track available insolation.

Regards, Martin Brown

Reply to
Martin Brown

Try a search like

"Industrial PC" "24 V"
"Industrial PC" "24V"
"Industrial PC" "48 V"
"Industrial PC" "48V"

and you will get quite a lot of hits.

Reply to
upsidedown

On a sunny day (Mon, 28 Jul 2014 13:26:22 +0300) it happened snipped-for-privacy@downunder.com wrote in :

It is my impression, at least from my last experience, that 'IT managers' are total morons, hiring thousands of people to do work that can be done by ONE good programmer[1]. As far as hardware goes... that would be really sad.

[1]Can be done MUCH BETTER by ONE good programmer.
Reply to
Jan Panteltje

Certainly /some/ IT managers are like that - but I am not :-)

Perhaps because I am a programmer (mainly small embedded systems, but also PC's, servers, embedded Linux), I run our IT department without hiring anyone, and using only about 10-15% of my time. Certainly being able to program has saved us greatly in time, in hardware and in software compared to a "typical" IT manager whose qualifications come from MS or Cisco.

Reply to
David Brown


That article is missing something because it's just not making any sense. If push comes to shove, residential solar power can be blocked from feeding the grid. Almost everywhere now has power-company-controlled load disconnects via wireless control; usually the big stuff like electric water heaters and heat pumps are disconnected during periods of maximal demand to avoid brownouts. They can use the same network to control solar installation grid feeds, they're just too lazy to get on it.


Reply to
bloggs.fredbloggs.fred

I wonder if I'm actually writing in Klingon here - people seem determined to help (and I'm very happy for that attitude), but they seem also to miss my point entirely.

Imagine you are in charge of the server room for a small company. You have between two and ten physical servers - some of which might be hosts for virtual servers. You have UPS's, a backup server or tape drive, perhaps a SAN for disks, a firewall, some network switches, etc. You want to buy your equipment from the "small/medium business" section of Dell, HP, Lenovo, etc. You need to be able to replace broken equipment. The setup should be understandable to others - so that when something goes wrong while you are on holiday, it can be fixed by phone.

This is the sort of setup most companies (or branches) have - whether they manage it themselves or outsource their IT. The last thing they want is some home-made power supply, or industrial PC's (at three times the price, and a fraction of the availability). The second-to-last thing they want is 380V DC, which is more dangerous, more expensive, and less convenient than the normal AC.

But if it were possible to buy this sort of equipment from the same manufacturers using 24V or 48V DC, it would save around 10% of the hardware costs and 30% of the electricity costs.

It's that simple.

It cannot be done today - there is /no/ appropriate standard DC supply covering this type of usage. 380V DC for datacenters is no more helpful here than 5V USB for telephones. It could easily be done - all it takes is for a couple of the big suppliers to agree on a voltage and a plug. Designing and integrating the new standard supplies would be a fairly simple process.

Reply to
David Brown

I think that discussion clearly shows that it is technically possible to handle both the transmission and distribution with DC only.

Now it is a question of economics, as the various AC systems are slowly faded out.

Unfortunately, political problems, both nationalistic and protectionist, might delay acceptance of global standards.

Reply to
upsidedown

On a sunny day (Mon, 28 Jul 2014 12:46:39 +0200) it happened David Brown wrote in :

I am glad to hear that. Recently another multi-million-Euro IT 'idea' or project was cancelled in the Netherlands; I think it was a modification of the tax system...

formatting link

203 million Euro to calculate people's taxes... the project flopped. It cost more than the taxes that came in... I think a couple of 10-year-olds could hack it together in a few weeks' holiday these days.

Look what happened with Obamacare ...

From the viewpoint of creating jobs it is^H^Hwas of course a great project. :-(

Reply to
Jan Panteltje

The problem with this sort of thing is not the IT managers - it's the PHB's or politicians higher up. What these people don't realise is that there is a maximum time a given IT project can take - around 2 years is usually the limit. If a project is expected to take longer, it will never happen - by the time it is deployed, the hardware and software originally specified will usually no longer be available, and the features will have outgrown the hardware bought at the beginning of the project.

The causes are over-ambition (the new system is supposed to share data across all these different offices), poor real-world specifications (only vague ideas about what it should actually do), little contact with real users with real needs, overly tight hardware and software specifications (such as declaring the PC's will have a particular model of cpu and particular version of a particular OS - making it look like the project is well-specified), and reliance on big contracting "consultants" (IBM, MS, HP, etc.) whose agenda is to sell their hardware and software rather than solve the original problem.

Reply to
David Brown

That "might" has my vote for understatement of the year!

Reply to
David Brown

If you are dependent on those companies' "small/medium business" quick support, you need to play by their rules.

For slightly more independence, you need hot (or at least cold) standby for those servers. Otherwise, if something fails, it typically takes days or weeks to get things fixed.

When small SCSI disks entered the market, they were considered flimsy compared to 14-inch disks. For this reason, the various RAID levels were created to enhance total system availability.

Prior to that, in the 1980s, VAXclusters used both 14-inch disk mirroring through HSC50/70 controllers and multiple CPUs. I worked for one customer for a decade during which the whole cluster was never rebooted, while individual CPUs were booted only once every 1-2 years for operating system upgrades.

Apparently Microsoft Windows has some cluster support, but I have no practical experience with it.

If you want to operate your computer room at, say, 24 or 48 V battery power, there is at least a lot of Ethernet and other industrial hardware capable of running on 24 Vdc (I have no experience with 48/60 Vdc).
Reply to
upsidedown

On a sunny day (Mon, 28 Jul 2014 12:56:58 +0200) it happened David Brown wrote in :

Well, I dunno what your 'server' requirement is (as to speed, memory, capacity, cores), but I have run my website from the laptop at times.

18 V DC in. The switch here also runs on DC (via an adaptor). The firewall is in the Linux server (iptables). There are 2 external 1 TB USB hard disks: one powered from 2 USB outputs, the other from an AC adaptor, so DC too.

The interesting thing in THAT setup is that the 'no break' server runs for up to 3 hours (the laptop battery). It's a dual core... with an integrated monitor, easy to use.

So... a couple of laptops, some eBay DC-DC converters, some car batteries, solar panels on the roof. It's up to you. Windmill....

RTG!!!!

Jipppeeee

Reply to
Jan Panteltje

On a sunny day (Mon, 28 Jul 2014 13:44:55 +0200) it happened David Brown wrote in :

Well, politics assigns the IT managers by selecting the company. The big companies lobby the politicians. Bloat happens deliberately; that is how the F35 was created - to make money, NOT to defend the US.

Top down often (always?) sucks.

Reply to
Jan Panteltje

That's why I want to change their rules!

For my own use, I don't follow their "rules" too closely - I don't go in for expensive "support" contracts, and I avoid everything vendor-specific that I can. That's why a cross-vendor standard is so important.

Yes indeed - for critical components, you have to have spares on hand. And you need a way to get replacement servers up and running quickly if something fails.

RAID is a must - but it should be Linux software RAID, rather than a vendor-specific hardware RAID card. What happens when the vendor's RAID card fails? You've lost all your disks (I know from experience).
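For reference, a minimal Linux software RAID1 mirror with mdadm looks roughly like this. The device names are illustrative only (check yours with lsblk first), and the config file path varies by distribution:

```shell
# Create a two-disk RAID1 mirror (device names are examples only!)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the initial sync and check array health
cat /proc/mdstat
mdadm --detail /dev/md0

# Save the array configuration so it assembles at boot
# (path may be /etc/mdadm/mdadm.conf on Debian-based systems)
mdadm --detail --scan >> /etc/mdadm.conf

# If one disk dies, the array keeps running; replace it with:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md0 --add /dev/sdd1
```

Because the mirroring is done in the kernel, the disks can be read on any Linux box - no proprietary controller is needed to recover the data, which is exactly the failure mode described above.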

Windows servers can be okay if you don't push them - I have one running a couple of Windows-only applications that has had almost no downtime in the last 8 years or so. But for the most part, my servers are Linux.

The level of clustering, HA, redundancy, etc., depends on your needs - there is no point in going over the top.

Until there is a substantial range of COTS servers, switches, etc., that run on 24 or 48V, and UPS's that power such devices, there is little benefit from making part of the server room 24V. The whole point is to avoid the wasteful conversion from UPS battery level up to 220V AC and then back down again to low-voltage DC.
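To put a rough number on that waste: the losses of each conversion stage multiply. A back-of-the-envelope sketch, where all the efficiency figures are plausible assumptions rather than measurements of any particular equipment:

```python
# Assumed, illustrative efficiencies for each conversion stage.
inverter = 0.90      # UPS battery DC -> 220 V AC
server_psu = 0.85    # 220 V AC -> low-voltage DC inside the server
dc_path = 0.92       # battery DC -> low-voltage DC directly (one stage)

double_conversion = inverter * server_psu   # the AC round trip
print(f"AC path efficiency: {double_conversion:.3f}")   # 0.765
print(f"DC path efficiency: {dc_path:.3f}")             # 0.920

saving = 1 - double_conversion / dc_path
print(f"Relative energy saving of the DC path: {saving:.1%}")  # 16.8%
```

With these assumed numbers the AC round trip throws away roughly a sixth of the energy compared to a single DC-DC stage, which is in the same ballpark as the ~30% electricity saving claimed earlier in the thread once cooling load is included.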

Reply to
David Brown

This is for a professional company, not a garage setup.

So while the processing requirements are modest (being Linux fileservers and application servers - and an iptables firewall/router), as are memory requirements (virtual machines using openvz are vastly more efficient than multiple servers or full virtualisation), it still has to be a reliable setup.

I would avoid USB disks for permanent storage - I have had several failures over the years. Granted, one case was when I knocked the disk off the shelf, but that doesn't happen with proper disks in a rack cabinet.

Reply to
David Brown

Until you have sufficient buying power, you either have to create your own 24/7 service organization or buy it from outside (such as from the HW vendor).

So you want to create your own 24/7 support group. However, if the 24/7 support is a single person, it is not going to work. I have personal experience of that.

The same risk exists with software RAIDs as well as hardware RAIDs. Any decent organization will make nightly backups to separate physical media, so in the worst case one day of work is lost.

Very little needs to be done if you know what you are doing.

I have used some Moxa network devices and Ethernet/serial converters with DC input from 8 to 30 V, or even 60 V.

No doubt there are similar products from dozens of other vendors on the market - why do you not use them?

Reply to
upsidedown

On a sunny day (Mon, 28 Jul 2014 16:31:06 +0200) it happened David Brown wrote in :

So, no argument against using some laptops.

What is in it is the same; I have a Seagate and a 'Platinum'. The Seagate is now many, many years old. USB connectors are sometimes flimsy (or the Chinese ultra-thin USB cables are). I have seen problems with some USB3 chipsets - specifically my Samsung laptop's USB3 not working with many USB2 things, though that could be a Linux driver issue. As for 'garage'... Apple comes to mind. I gave you a solution; you do not want a solution. So be it.

Reply to
Jan Panteltje
