After 3 years, folks are still trying to circumvent Sony's hypervisor control over the graphics hardware on the PS3, so those of us who run Linux on it cannot get accelerated graphics or GL performance.
Apparently for them, Utopia's castle walls are still standing.
On a sunny day (Sun, 10 Aug 2008 15:02:40 GMT) it happened Jan Panteltje wrote in :
And for the others: Sony was supposed to put two HDMI ports on the PS3, which would have made for interesting experiments.
But the real PS3 only had one, so I decided to skip the Sony product (most Sony products I have bought in the past were really bad, actually). And Linux runs on anything; for less than the cost of a PS3 you can assemble a good PC, so if you must run Linux, why bother torturing yourself on a PS3? Use a real computer.
But perhaps if you are one of those gamers... well, the video modes also suck on that thing. And the power consumption is high, not green at all, and it does not have that nice Nintendo remote. :-)
That's just the problem - programmers have been so good at hiding the limitations of poorly designed hardware that the whole world thinks that hardware must be perfect and needs no attention other than making it go faster.
If you look at some modern I/O device architectures, it's obvious the hardware engineers never gave a second thought to how the thing would be programmed efficiently...
On a sunny day (Sun, 10 Aug 2008 17:05:31 +0000) it happened ChrisQ wrote in :
Interesting. For me, I have a hardware background, but also software; the two came together with FPGAs, when I wanted to implement DES as fast as possible. I wound up with just a bunch of gates and one clock cycle, so no program :-) No loops (all unfolded in hardware). So you need to define some boundary between hardware resources (that one used a lot of gates) and software resources, I think.
What does C have to do with it, other than being a contributor to the chaos that modern computing is? More big programming projects fail than ever make it to market. OSes are commonly shipped with hundreds or sometimes thousands of bugs. Serious damage has been done to consumers and business, and US national security has been compromised, through the criminally stupid design of Windows. Lots of people are refusing to upgrade their apps because the newer releases are bigger, slower, and more fragile than the older ones. In products with hardware, HDL-based logic, and firmware, it's nearly always the firmware that's full of bugs. If engineers can write bug-free VHDL, which they usually do, why can't programmers write bug-free C, which they practically never do?
Things are broken, and we need a change. Since hardware works and software doesn't, we need more of the former with more control over less of the latter. Fortunately, that *will* happen, and multicore is one of the drivers.
I have stated no theories. I have observed that the number of cores per CPU chip is increasing radically, that Moore's law has repartitioned itself away from raw CPU complexity and speed into multiple, relatively modest processors. That this is happening across the range of processors, scientific and desktop and embedded. Are you denying that this is happening?
If not, do you have any opinions on whether having hundreds of fairly fast CPUs, instead of one blindingly-fast one, will change OS design? Will it change embedded app design?
If you have no opinions, and can conjecture no change, why do you get mad at people who do, and can? Why do you post in a group that has "design" in its name? Maybe you should start and moderate sci.electronics.tradition.
On a sunny day (Sun, 10 Aug 2008 10:32:17 -0700) it happened John Larkin wrote in :
Hi John, electronics design is not (!= in C ;-) ) software design. Just stating that there will be more cores on a chip is obvious; we have known that for years.
Stating that more cores will improve _reliability_ (in the widest sense of the word), as you seem to (at least that is what I understand from your postings), puts the burden of proof on you.
You call software bad, yet you claim your own small ASM programs are perfect; that makes one suspicious.
There is a lot of good software. I would say that software that does what it is intended to do, and does it without crashing, is good software. If that software runs on good hardware you can do a lot with it. All the problems with MS operating systems are alien to me; the last MS OS I bought was Win98SE. I still have it on a PC, and it does occasionally misbehave; I use it for my Canon scanner and DVD layout sometimes. I will not go online with it..... All other things run various versions / distributions of Linux. I think I have tried most of these; all but RatHead worked OK.
So I do not really see your problem: things do not crash, the software I wrote myself does not crash, nothing gets infected with trojans, viruses, worms, or other things... I have a very good firewall (iptables) and the latest DNS fixes; this server has now been running since 2004, still with the same Seagate harddisk...
What is your problem? As to computer languages: the portability of C will help you out big time once you want to run that same stable application on, say, a MIPS platform, or any other processor. Rewriting your code in ASM for each new platform is asking for bugs, so C is a universal solution, especially for more complex programs. AND operating systems.
On a sunny day (Sun, 10 Aug 2008 10:46:26 -0700) it happened AnimalMagic wrote in :
RS232 terminal dummy.
formatting link
Actually, I have the RS232 disconnected, as all is working so well. I use telnet to access the wap server; it is faster and does not interfere with normal operations. It allows me to start and stop processes, get logfiles, set wireless on and off, etc. Screenshot of a simple telnet session to the wap server from another PC: ftp://panteltje.com/pub/wap.gif
That's not what happened. They hired David Cutler from DEC, where he had worked on VMS, and pretty much left him alone. The chaos was and is part of the culture of modern programming.
I've snipped the rest of your drivel, Jan, because the above says it all.
Before you make even more of an ass of yourself, why not actually do a Google search on 'monolithic kernel' to get some idea of how it's actually defined by those who know what they're talking about?
On a sunny day (Sun, 10 Aug 2008 15:25:00 -0400) it happened Bill Todd wrote in :
So I did, and things are not so sharply bounded as they may seem. You can have a device driver in user space in Linux too; I have done that myself. I am referring to
formatting link
I agree my definition of monolithic is slightly different from yours and Wikipedia's. Whether you know what you are talking about, I dunno; maybe you do.
On a sunny day (Sun, 10 Aug 2008 12:22:17 -0700) it happened John Larkin wrote in :
QNX has just made its sources public, though you still need a license for commercial use. Are you thinking that way? In the eighties I worked with somebody who really liked QNX; I was into Unix. Unix in the form of Linux solves a lot of programming problems, but brought the real-time problem of task switching breaking some MSDOS-like apps, so in a way it required more hardware.
Why spend all those millions? We _have_ Linux; it works, and - because it is mostly written in C - it is relatively easy to port to many platforms.
While there is a minor amount of boundary fuzzing at the edges (for example, NT originally claimed to be microkernel- or at least 'hybrid kernel'-based due to its use of separate processes to handle security and the 'personalities'), NT and its descendants are basically still monolithic kernels (and Linux doesn't even have that fig-leaf to hide behind: it's monolithic, period).
That is not, however, what you were talking about: you were talking about Linux kernel modules, which are part of the monolithic Linux kernel despite being nicely modularized and loadable on demand (not that Linux is anything special in having loadable-on-demand drivers, you understand: for example, dynamically-loadable drivers were designed into NT's first release in 1993, and the kernel proper - while monolithic - was definitely modular).
That diagram clearly defines the difference between monolithic kernels and microkernels to be based on where the protection domain boundaries fall, not modularity.