Hello to all the great makers, doers and creative people who are using Red, helping the Red Language grow and improve!
As always, there’s a standing invitation for you to join us on Gitter, Telegram or Github (if you haven’t already) to ask questions and tell us about your Red-powered projects.
Here are some recent highlights we’d like to share with you:
1. Tickets Get Priority
In the last month, our core team has closed a large number of tickets. We’d like to thank community members rgchris and dumblob, who are just a few of the passionate contributors putting Red through its paces and providing feedback as fixes and changes occur. @WArP ran the numbers for us, showing a cyclical growth pattern linking bursts of closed issues with periods of serious Red progress, and September’s not even done yet!
2. CSV Codec Available
Our newly updated CSV codec has been merged into the master branch and is now part of the nightly (or automatic) build here. It is in an experimental phase, and we want your feedback.
Should the standard codec support only block results, so it’s as simple as possible? Or do people want and need record and column formats as well (using the load-csv/to-csv helper funcs, rather than load/as)? Including those features as standard means they’re always available, rather than moving them to an extended CSV module; the downside is added size to the base Red binary.
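For readers new to the codec, here is a hedged sketch of the two access paths under discussion (the sample data is invented; the block-of-rows shape shown for load/as is the simple block format described above, and the shape of the load-csv result is an assumption, not the codec’s settled API):

```red
csv-data: {name,age
Alice,30
Bob,25}

;-- standard path: the codec's block format via load/as
;-- (expected to yield a block of row blocks, e.g. [["name" "age"] ...])
rows: load/as csv-data 'csv

;-- helper path: the load-csv func, which would also offer the
;-- record and column formats if they become standard
records: load-csv csv-data
```

Whichever way the decision goes, tell us which formats your own projects actually need.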
Applause goes to @rebolek for his excellent organization and his wiki on the codec, which explains the various ways in which Red can represent data matrices. He writes, “Choosing the right format depends on a lot of circumstances, for example, memory usage - column store is more efficient if you have more rows than columns. The bigger the difference, the more efficient.”
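To make that trade-off concrete, here is a small illustration of the three layouts the wiki describes, using a hypothetical two-column dataset (the exact literal forms the codec emits may differ):

```red
;-- block of rows: the simplest form, one block per row
[["Alice" "30"] ["Bob" "25"]]

;-- records: one map per row, repeating the column names in every row
[#(name: "Alice" age: "30") #(name: "Bob" age: "25")]

;-- columns: one block per column -- with many rows and few columns,
;-- each column name is stored only once, hence the memory savings
#(name: ["Alice" "Bob"] age: ["30" "25"])
```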
You can judge their efficiency here, where @rebolek has laid out the compile time, size and speed of each version, including encapping and lite. Be sure to get the latest build, and chat with everyone on Gitter to tell us what you think.
3. Red has reached 4K stars on GitHub!
We’re truly grateful for all the interest and support, and we are proud of the way our growth has been powered by this community.
4. AI + Red Lang Stack: Precision Tuning With Local OR Web-Based Datasets
In conversation with @ameridroid:
“Presently, it seems like most AI systems available today either allow building an AI from scratch using low level code (difficult and time-consuming), OR using a pre-built AI system that doesn’t allow any fine-tuning or low-level programming…with the advent of NPUs (Neural Processing Units) akin to CPUs and GPUs, an AI toolkit would allow specifying what type of AI we want to perform (facial, image or speech recognition, generic neural net for specialized AI functions, etc.), the training data (images, audio, etc.) and then allow us to send it the input data stream and receive an output data stream…[using Red] would also allow us to integrate with the AI system at a low level if we have specific needs not addressed by the higher-level funcs. Red dialects would be a good way to define the AI functionality that’s desired (a lot like VID does for graphics), but also allow the AI components, like the learning dataset or output data stream sanitization routines, to be fine-tuned via functions. Red can already work on web-based data using ‘read or ‘load, or work on local data in the same way; the learning data for a particular AI task could be located on the web or on the local machine. That’s not easily possible with a lot of the AI solutions available today.”
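As a small illustration of the point about ‘read and ‘load, the same words fetch local and web-based data in Red (the file names and URL below are hypothetical):

```red
;-- read fetches raw data, local or remote, with the same word
local-set:  read %faces/training-set.csv
remote-set: read https://example.com/datasets/faces.csv

;-- load additionally parses the data into Red values, so a
;-- learning configuration could simply be a plain Red file
config: load %model-config.red
```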
Check back in the next few days for an update from @dockimbel!
Ideas, contributions, feedback? Leave a comment here, or c’mon over and join our conversation on Telegram or Github.