Training

I bought a Pi4 and now ...

Where can I get the best training online to work with the Pi4 hardware and learn to code for it too?

I want to make a remote control vehicle with a camera that I can drive around my house. I am handicapped and do not get around well and just would like to see what is going on in other parts of the house.

Reply to
Aoli

Are you wanting to use a kit, or build it from scratch?

Reply to
Andy Burns

Do you know:

- how to use the Linux OS?

- any programming languages, and if so which?

- what sort of programs have you written on other computers?

Knowing this stuff will let us provide more relevant help.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

Thank you for your responses.

I can do from scratch or from a kit; from scratch I would need some guidance, but I do have the tools that might be needed, except for 3D printing, which I do not do.

I am slightly familiar with Linux, but only with the more polished distributions such as Mint, with a graphical interface.

I can program in C and a little Java. Python looks pretty straightforward to me. I am proficient programming in Visual Basic Classic (VB6).

What language would I need?


Reply to
Aoli

It's a good idea to get familiar with bash, the default command language, because you will probably find yourself writing shell scripts to handle system management tasks, backups, etc.

C, Java, Python are good, though I'm uncertain how similar the Windows C support library is to the C Standard Library, as used in all UNIX/Linux systems. It's been over 20 years since I compiled a program under Windows. Java is, of course, much the same under any OS.

BASIC is very little used by RPi programmers; most of those who used to write BASIC would now write Python and, in any case, only the .NET flavours of BASIC seem to be widely supported on RPis.

Depending how you prefer to learn stuff, the "Raspberry Pi User Guide" may help. You may also find "Linux in a Nutshell" useful for understanding how Linux and its scripting language work. It also explains how help systems like 'apropos' and the 'man' command (both used to find utility programs and learn how to use them) are best used. There are a LOT of these: the UNIX/Linux world works on the principle that a program should do one thing and do it well, and that you do more complex things by connecting a number of these utility programs so that the output of one feeds the next in a 'pipeline'.
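The same pipeline idea can be driven from Python, for what it's worth; a minimal sketch that chains the external `sort` and `uniq` utilities exactly as a shell pipeline would (assumes a Linux box with the usual core utilities installed):

```python
import subprocess

# Equivalent of the shell pipeline:  printf 'b\na\nb\n' | sort | uniq
producer = subprocess.Popen(["printf", r"b\na\nb\n"], stdout=subprocess.PIPE)
sorter = subprocess.Popen(["sort"], stdin=producer.stdout, stdout=subprocess.PIPE)
deduper = subprocess.Popen(["uniq"], stdin=sorter.stdout, stdout=subprocess.PIPE)
producer.stdout.close()  # let SIGPIPE propagate if 'sort' exits early
sorter.stdout.close()
output = deduper.communicate()[0].decode()
print(output)  # two lines: a, b
```

Each program in the chain just reads stdin and writes stdout, which is exactly why the "do one thing well" utilities compose so easily.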

O'Reilly books are also good, though some are fairly pricey. That said, if you're going to write a lot of C, then any of the C system programming books for UNIX/Linux are good to have.

It also helps to understand regular expressions (which you may already know from using them in Java or C programs) because a lot of command line tools and editors take regexes as parameters.
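As a quick illustration, the same regex syntax carries across most tools and languages; here it is in Python (the log line and pattern are invented for the example):

```python
import re

# Pull a dotted-quad IP address out of a (made-up) log line.
line = "Jun 21 11:52:49 pi sshd[123]: Accepted password from 192.168.1.20"
match = re.search(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b", line)
print(match.group(1))  # 192.168.1.20
```

The identical pattern works in `grep -E`, `sed -E`, and most editors' search boxes.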

There are a lot of text editors: 'gedit', either on its own or alongside the lightweight 'geany' IDE, is not bad, but some of the others ('emacs', 'microemacs' and 'vi') are a lot more powerful and worth looking at to see which you prefer. I have David Curry's "Unix Systems Programming for SVR4" and wouldn't be without it, though it's quite a brick: it's quite old, but still very useful.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

There are numerous tutorials at adafruit.com, and the free e-mag 'MagPi' at raspberrypi.org would also be helpful. The main language for the Pi is Python. It should not be too difficult to control from a tablet - that would just mean you would need a Pi onboard for control over WiFi or Bluetooth.

Reply to
ray

On Mon, 21 Jun 2021 11:52:49 -0700, Aoli declaimed the following:

The R-Pi, with the default OS, comes out of the box with a graphical interface... It does require one use an HDMI connected monitor/TV, and a USB keyboard/mouse... Or one enables the VNC server and connects to it over network using a VNC client program.

You won't like this: whichever one satisfies your requirements -- you could install GNAT and code in Ada, or suck up a few GB installing Lazarus/FreePascal.

Python was considered the "native" (as in: meant for beginners) language for the R-Pi...

Thing is, your requirements are so high-level that one can't really declare for any language. At the very least, you are looking at a multi-threaded application, if not a multi-process system. Threads exist within a single process and, depending on implementation, can share information easily; multi-process will require designing some interprocess communication (IPC) system/protocol. Unfortunately, the only OSs whose IPC I was comfortable with are AmigaOS (message ports: linked lists created by processes to which other processes can append "messages"; since the Amiga used a flat address space, all processes could access any memory, so messages tended to contain just a pointer to a buffer) and [Open]VMS (mailboxes to which processes can write/read).

Take into account that Linux is not a "real-time" OS; an RT kernel just makes it more responsive without truly being real-time.

The process that handles the drive motors is probably going to run at a higher priority than one that is handling a camera and relaying video over WiFi. Higher than both of those might be the process that handles command receipt and return telemetry. It should spend a lot of time just waiting for commands, and some time packaging telemetry packets -- no busy loops, since that would interfere with lower level processes (a busy loop could totally block out the motor control process from responding to forwarded STOP commands).

You may even find that you only want to run the user-interface on the R-Pi, with it sending commands to multiple microcontrollers (Arduino, TIVA C, AdaFruit Metro [which uses CircuitPython natively, but can have an Arduino compatible loader installed]).

Some books that might be of import:

Real-Time Systems and Programming Languages: Ada, Real-Time Java and C/Real-Time POSIX 4th Ed (Alan Burns/Andy Wellings, 2009 Addison-Wesley)

Embedded Linux Primer 2nd Ed (Christopher Hallinan, 2011 Prentice-Hall)

{Interesting: both Addison-Wesley and Prentice-Hall appear to be owned by Pearson}

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
	wlfraed@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/
Reply to
Dennis Lee Bieber

On Mon, 21 Jun 2021 21:49:01 -0400, Dennis Lee Bieber declaimed the following:

On second thought -- you can probably ignore this one; it is mostly about configuring and building Linux kernels, and device driver stuff.

Not Linux-based, but of possible use for algorithms:

Make: Arduino Bots and Gadgets

Make: Making Simple Robots

Make an Arduino-Controlled Robot

Hands-On RTOS with Microcontrollers (FreeRTOS on an STM32, but should port to other ARM processors)

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
	wlfraed@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/
Reply to
Dennis Lee Bieber

Then you can use a browser on the tablet to access the Pi.

You probably will need a more 'native' language on the Pi. Python or C, really.

--
To ban Christmas, simply give turkeys the vote.
Reply to
The Natural Philosopher


If you are going to multi-thread, normally I'd suggest going for Java, which also has a decent GUI system, Swing, especially using the 3rd-party RiverLayout layout manager from se.datadosen.

Disclaimer: I know this works very well on kit like Lenovo laptops, but I haven't yet installed Java on an RPi, so have no idea of the likely performance or compile speeds.

OTOH, if you'd rather go multi-process, then writing a set of co-operating processes in C that communicate through a block of shared memory and use semaphores to co-ordinate access is fairly easy, especially if you have the UNIX Systems Programming book I recommended.
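A rough equivalent of that pattern can be sketched in Python using `multiprocessing` in place of the raw C shared-memory and semaphore calls (Python 3.8+ for `shared_memory`; this is an illustrative sketch, not the C design itself):

```python
from multiprocessing import Process, Semaphore, shared_memory

def worker(shm_name, sem):
    # Attach to the block created by the owner and update it under the lock.
    shm = shared_memory.SharedMemory(name=shm_name)
    with sem:                 # semaphore co-ordinates access to the block
        shm.buf[0] = 42       # process the shared data in place
    shm.close()

def run():
    sem = Semaphore(1)
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=worker, args=(shm.name, sem))
    p.start()
    p.join()
    value = shm.buf[0]
    shm.close()
    shm.unlink()              # the owning process removes the block
    return value

if __name__ == "__main__":
    print(run())
```

As in the C version, the data never moves: the worker operates on the block in place, and only a name/reference crosses the process boundary.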

A third way to go is to write a set of small, single threading C processes, each carrying out a single, well-defined part of the overall task.

One of these processes owns a block of shared memory and manages access to it: blocks of incoming data do not move once they are loaded into the shared memory by a loader process, where they remain until all operations on them are complete and an exporter process has done the final operation and requested their deletion from shared memory.

The other worker processes operate by requesting access to a data block, which is locked as they get passed its address. After they've done their thing the worker tells the shared memory owner to release the last block and requests another: each stored block is either 'available' or marked as allocated to a worker process.

All interprocess communication is done by standard UNIX message passing, and is handled by the standard C poll() mechanism, which is brilliant for handling this sort of message queueing.
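Python exposes the same poll(2) mechanism through `select.poll`, so the waiting side can be sketched without the C boilerplate (a pipe stands in for the message channel here; on Windows `select.poll` is unavailable, so this assumes Linux):

```python
import os
import select

# A pipe stands in for the incoming message channel.
read_fd, write_fd = os.pipe()

poller = select.poll()
poller.register(read_fd, select.POLLIN)

os.write(write_fd, b"next-block-please")   # another process would do this
events = poller.poll(1000)                 # block for up to 1000 ms
fd, event = events[0]
message = os.read(fd, 64)
print(message)
```

Registering several descriptors with the same poller is how one process waits on many message queues at once without busy-looping.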

This sort of multi-process data handling stream works really well and is quite simple to implement: I designed and implemented a large, high performance ETL system, running on DEC Alphaserver kit, that transformed incoming data and loaded it into a data warehouse using this approach. It easily exceeded the stated performance requirements and, because each step was handled by a separate, single threaded process, the code was both simple and secure.

Could also be a good place to use PICAXE chips (despite their rather horrid compiled, unsigned-integer BASIC; at least the PICAXE BASIC compiler runs well on an RPi 2B), or even a Raspberry Pi Pico.

Don't forget the standard C poll() function I mentioned earlier, which can handle responses from microcontrollers as well as from other processes. See 'man 2 poll' for details.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

On Tue, 22 Jun 2021 13:53:54 -0000 (UTC), Martin Gregorie declaimed the following:

I could have suggested the Parallax Propeller chip. An extremely weird microcontroller.

8 "cogs" (cores) running in lockstep, all sharing the 32 GPIO lines. Nominally a core runs Propeller BASIC interpreter which accesses compiled code from external memory (there is controller that rotates from cog to cog allowing access to external memory), but can be loaded with Propeller assembly code. I believe there is also a C compiler available.

The former Parallax BASIC-Stamps (if any are still stocked) are way too expensive to consider. One could (in the US at least) buy 7-8 TIVA C TM4C123G Launchpads for the cost of a BS2p. Launchpads are ARM Cortex-M4 with floating point. The 123 is the slower one, at 80MHz; 256kB flash, 32kB RAM, and a 2kB EEPROM; 12-bit ADC (2x), and way too many timers (6x64-bit, each of which can be split into a pair of 32-bit timers, AND 6x32-bit, also splittable into pairs of 16-bit timers). (The board actually uses a second TM4C123 chip to handle the interface for programming!) For beginners, TI cloned the Arduino IDE as Energia (though for some reason the visit link under the IDE Help goes to the Arduino site -- some programmer missed a URL, I guess).

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
	wlfraed@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/
Reply to
Dennis Lee Bieber

You won't get your answer here. Go to

formatting link
and choose a topic. Also they have
formatting link

Reply to
A. Dumas

No need to go that deep for many apps. My pi runs a multi-process piece of code that simply uses a file in a small ram disk created for the purpose.

Obviously in physical terms it is shared memory, but in logical terms it's just a file.

One process deletes it and rewrites it - the other process reads the data in it.
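A sketch of that file-handoff pattern in Python (the path and fields are invented; on a Pi the file would sit in a small ram disk such as /dev/shm, but any directory demonstrates the logic, and a rename is used here as one way to replace the file without the reader seeing a half-written copy):

```python
import json
import os
import tempfile

# Writer and reader share nothing but an agreed file path.
state_path = os.path.join(tempfile.gettempdir(), "robot_state.json")

def write_state(state):
    # Write to a temp name, then rename over the old file.
    tmp = state_path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, state_path)   # atomic on POSIX filesystems

def read_state():
    with open(state_path) as f:
        return json.load(f)

write_state({"speed": 3, "heading": 90})
print(read_state())
```

The two processes never need to synchronise directly; the filesystem (backed by RAM) is the whole IPC mechanism.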

--
There is something fascinating about science. One gets such wholesale  
returns of conjecture out of such a trifling investment of fact. 

Mark Twain
Reply to
The Natural Philosopher

Re: Training By: Aoli to All on Sun Jun 20 2021 08:06 am

There is a relevant Linux Magazine article.

formatting link

It is for building a remote control boat that you can manipulate from your smartphone.

--
gopher://gopher.richardfalken.com/1/richardfalken
Reply to
Richard Falken

Re: Training By: Richard Falken to Aoli on Thu Jun 24 2021 05:17 am

Just in case it is not clear, I think the article is relevant because it gives a starting point to work from. It should not be too hard to adapt the tutorial from boats to a different vehicle.

--
gopher://gopher.richardfalken.com/1/richardfalken
Reply to
Richard Falken

That depends on the volume of stored data. I agree that an in-memory FS would work for queuing small data items, but it could turn out to be a bottleneck for large data items, simply because moving them between processes and the FS becomes expensive if you're handling lots of large items.

The ETL case I mentioned was a case in point: the data items were logfiles from a large network of message switches, each holding several MB of log records. The network was made of hundreds to thousands of switches, each writing a new logfile per day. The data sanitisation and reformatting process involved loading each logfile into RAM and passing it through 8-10 worker processes to convert it to a common format: the switches were of various makes and models, so their log formats differed. As each logfile was converted to the common format it was passed to the datawarehouse loader and deleted from RAM. It was important for system speed that the sanitising and reformatting steps could run in parallel and that the logfiles were not copied between workers: consequently each logfile was loaded into RAM and a reference to it was passed to the logfile storage manager. This owned all the in-memory logfiles and kept track of the status of each one. Workers were handed a reference to the next logfile, applied the changes they were configured to make and then released the logfile, marking it ready for the next processing stage. That meant that, once a logfile was loaded into RAM, it stayed at the same physical address until the datawarehouse loader program added its content to the database and told the logfile manager to delete it from RAM.

The fact that each logfile occupied the same RAM address from being read in until it was loaded into the datawarehouse saved a LOT of mill time: if we'd used an in-memory FS instead, each log would have had to be read into each worker and written back to the FS when that step was complete. We'd also have lost the ability to tune performance by running multiple copies of each worker.

Of course, but in the case I described, the mere fact that each logfile stayed at the same address once it was loaded into RAM, and remained there until it had been loaded into the database and deleted, saved a lot of CPU cycles compared with reading it into a worker, applying changes and then writing it back to the FS.

Also, having a collection of simple processes, each applying a single operation to a single field type, made system configuration simple and the ability to run multiple copies of a slow process made system performance tuning much easier.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie
