Aug 23, 2016

Newest open source project: managed-commons-collections-squarelist

I've published the source of a tentative 1.0.0-beta of this MIT-licensed project on GitHub.


What is a SquareList?


Basically, a 'square' structure (kept as deep as it is wide) formed from a List of LinkedLists, with items kept in ascending order, so the minimum and maximum values are known in constant time, O(1). Also, because the structure stays nearly square, you can insert, search, and delete values in O(√n) worst-case time.
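
For illustration, here is a minimal sketch of the idea in C# (my own sketch, not the actual library code): a List of LinkedLists ("columns"), each column holding at most ⌈√n⌉ sorted items, with the columns themselves kept in ascending order:

using System;
using System.Collections.Generic;
using System.Linq;

public class SquareListSketch<T> where T : IComparable<T>
{
    private readonly List<LinkedList<T>> _columns = new List<LinkedList<T>>();
    private int _count;

    // O(1): the smallest item is the head of the first column.
    public T Min => _columns[0].First.Value;

    // O(1): the largest item is the tail of the last column.
    public T Max => _columns[_columns.Count - 1].Last.Value;

    public void Insert(T item)
    {
        _count++;
        int width = (int)Math.Ceiling(Math.Sqrt(_count));
        // O(√n): pick the first column whose largest element is >= item,
        // falling back to the last column for a new maximum.
        var column = _columns.FirstOrDefault(c => item.CompareTo(c.Last.Value) <= 0)
                     ?? _columns.LastOrDefault();
        if (column == null)
            _columns.Add(column = new LinkedList<T>());
        // O(√n): walk the chosen column to keep it internally sorted.
        var node = column.First;
        while (node != null && node.Value.CompareTo(item) < 0)
            node = node.Next;
        if (node == null)
            column.AddLast(item);
        else
            column.AddBefore(node, item);
        // Rebalance: cascade each column's overflow forward so the
        // structure stays (approximately) square.
        for (int i = _columns.IndexOf(column); _columns[i].Count > width; i++)
        {
            if (i + 1 == _columns.Count)
                _columns.Add(new LinkedList<T>());
            var overflow = _columns[i].Last.Value;
            _columns[i].RemoveLast();
            _columns[i + 1].AddFirst(overflow);
        }
    }
}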


Implementation details


I implemented it as a generics-based collection, in C#, targeting .NET Standard 1.0.

It is loosely based on an article by Mark Sams published in the May 2013 issue of Dr. Dobb's Journal.

This implementation allows multiple instances of the same value to be inserted, and then deleted one at a time or wholesale.
With performance in mind, searching for counting/existence is done bidirectionally, but removal scans only in the forward direction, to give stable behavior (remove the first found) when duplicates exist, as sketched below.
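
Removal in the sketch above would then scan forward only (again, my illustration, not the library's code):

// Forward-only removal: with duplicates, the first occurrence found from the
// front is always the one removed, giving stable behavior.
public bool RemoveOne(T item)
{
    foreach (var column in _columns)
    {
        if (item.CompareTo(column.Last.Value) > 0)
            continue; // if present, the item lives in a later column
        for (var node = column.First; node != null; node = node.Next)
        {
            int cmp = node.Value.CompareTo(item);
            if (cmp > 0)
                return false; // passed the insertion point: item absent
            if (cmp == 0)
            {
                column.Remove(node);
                _count--;
                if (column.Count == 0)
                    _columns.Remove(column); // returning right away keeps the foreach safe
                return true;
            }
        }
        return false; // item would belong in this column but wasn't found
    }
    return false;
}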

The trade-off is obviously more memory for better performance: there is no copying of large swaths of memory and no linear scan over the total mass of data, but the downside is lots of pointers, from each node, in both directions.

UPDATE: The columns no longer use linked lists; they now manage slices of one big array containing the whole set of values. Inserts are therefore penalized by moving blocks of values around, but only the minimum subset is moved to open space for the insert. A problem still being worked on is how to perform searches on sets that have had a large part deleted. Shrinking the array is done manually, while expanding it is still automatic.
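
A sketch of that kind of minimal block move (my illustration of the idea, not the library's actual code): given the occupied slice buffer[lo..hi) of the shared array and the sorted insertion point, shift whichever neighboring block is smaller to open a gap:

using System;

static class GapInsertSketch
{
    // Inserts `value` at sorted position `pos` inside buffer[lo..hi),
    // assuming at least one free cell exists on the side being shifted.
    public static (int lo, int hi) Insert(int[] buffer, int lo, int hi, int pos, int value)
    {
        bool shiftLeft = (pos - lo <= hi - pos && lo > 0) || hi == buffer.Length;
        if (shiftLeft)
        {
            // Move the smaller left block one cell to the left.
            Array.Copy(buffer, lo, buffer, lo - 1, pos - lo);
            buffer[pos - 1] = value;
            return (lo - 1, hi);
        }
        // Move the smaller right block one cell to the right.
        Array.Copy(buffer, pos, buffer, pos + 1, hi - pos);
        buffer[pos] = value;
        return (lo, hi + 1);
    }
}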

Jun 23, 2015

Experiment on Roslyn C# compiler: Translatable Strings

Basically, anyone can use resources, gettext, or managed-commons-core to translate (localize) strings in their C# code, and it can even be fairly terse, as in this sample using managed-commons-core:
using System.Collections.Generic;
using Commons.GetOptions;
using static System.Console;
using static Commons.Translation.TranslationService;

namespace TestApp
{
    class AppCommand
    {
        // Returns the translated form of "First mock command"
        public virtual string Description { get { return _("First mock command"); } }

        public virtual string Name { get { return "alpha"; } }

        // Writes the translated form of "Command {0} executed!" with Name substituted
        public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
        {
            WriteLine(TranslateAndFormat("Command {0} executed!", Name));
        }
    }
}
Then along comes C# 6.0 with its fantastic new feature, interpolated strings, and now that last method in our example can't be rewritten to use it, because:
public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
{
    WriteLine(_($"Command {Name} executed!"));
}
would actually format first and only then try to look up a translation of the already-formatted text, which is exactly the wrong thing to happen...
The experiment I've started would add to C# 7 a new syntax for translatable strings, turning that snippet into:
using System.Collections.Generic;
using Commons.GetOptions;
using static System.Console;

namespace TestApp
{
    class AppCommand
    {
        // Returns the translated form of "First mock command"
        public virtual string Description { get { return $_"First mock command"; } }

        public virtual string Name { get { return "alpha"; } }

        // Writes the translated form of "Command {0} executed!" with Name substituted
        public virtual void Execute(IEnumerable<string> args, ErrorReporter ReportError)
        {
            WriteLine($_"Command {Name} executed!");
        }
    }
}
Interpolated strings can be captured as an IFormattable, and thus one can easily do some localization (number formatting, for instance), but not true translation; so this proposed feature is interesting beyond the small gain of shorter code.
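
To make the distinction concrete, C# 6 already lets an interpolated string be captured as a FormattableString and formatted under a chosen culture; numbers get localized, but the surrounding text stays untranslated:

using System;
using System.Globalization;

class Demo
{
    static void Main()
    {
        FormattableString s = $"Total: {1234.5}";
        // Brazilian Portuguese number formatting: prints "Total: 1234,5"
        Console.WriteLine(s.ToString(CultureInfo.GetCultureInfo("pt-BR")));
    }
}
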
But the killer capability that adding this tentative feature to the compiler would give us is having the extraction of translatable texts done by the compiler, as it already does for XML documentation, when the right command-line parameter is specified.
$_"Command {Name} executed!" would be extracted as "Command {0} executed!", automagically.
All is well, but some may ask how this scheme, which looks a lot like the way gettext does things, would work when extracting to a .resx file, where keys can't be arbitrary strings. For that scenario the compiler would generate SHA1 hashes as keys and insert the hashing into the calls to the TranslationService behind the scenes. TranslationService is a pluggable infrastructure in which 'translators' can source their translations from resources, .mo files, hard-coded dictionaries, whatever...
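
A minimal sketch of how such keys could be derived (my assumption; nothing here is final):

using System;
using System.Security.Cryptography;
using System.Text;

static class ResxKeySketch
{
    // Hash the extracted format string into a .resx-safe key.
    public static string For(string formatString)
    {
        using (var sha1 = SHA1.Create())
        {
            var hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(formatString));
            // The prefix keeps the key a valid identifier-like name.
            return "K" + BitConverter.ToString(hash).Replace("-", "");
        }
    }
}

// ResxKeySketch.For("Command {0} executed!") yields a stable 40-hex-digit key.
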
My experimentation will use managed-commons-core, of which I'm the core developer/maintainer, as the backend, but if real merit is found in this discussion, surely the runtime team will have to come forward and implement something like it, or just borrow the logic from my implementation there, which is MIT-licensed.

  1. Code
  2. Issue

Jul 28, 2013

Testing Blogger's Android App

Just a test. Later I'll post something interesting, I hope!

Aug 7, 2012

Event: Interopmix & Mobile Games August 24/25 at BandTec College in São Paulo (Brazil)


In recent years the gaming industry has typically earned double what the billion-dollar movie industry takes in, and there is still a lot of room to grow. In Brazil particularly it is just in its infancy, so there are plenty of opportunities just ahead.

But to seize those opportunities, desire and entrepreneurial spirit are not enough; you need refined knowledge and technical mastery, since creating a game demands building teams with skills in various areas such as programming, graphic design, and 3D modeling.

To further the spread of that knowledge, engage new professionals, and show what's new in this area, BandTec College will be hosting the Interopmix Mobile & Games event in São Paulo.

We will present new technologies based on interoperable .NET platforms, especially Mono, for the development of mobile games and applications.

Among the technologies to be presented are .NET, Mono, the C# language, XNA, and Unity3D, targeting Windows, Linux, Mac, iPhone, Android, Windows Phone, and PlayStation Vita, among others.

Renowned presenters include:
  • José Antonio Leal de Farias (Microsoft MVP - creator of MonoGame)
  • Claudenir C. Andrade (Microsoft MVP - Integration and Development Manager, R&D)
  • Rodrigo Kumpera - Mono hacker (GC) at Xamarin
  • Breno Azevedo - game designer - hired by Electronic Arts LA and producer of Taikodom
  • Alexandre Rocha Lima - Mono hacker - worked on the MonoBasic project, among others!
The event will take place August 24th/25th at BandTec College campus in São Paulo.

If you are interested in attending the event or watching it over the Internet, complete your pre-registration at the event site.

Be sure to pre-register for this event, because although it is FREE of charge, there is a limited number of places!

Hurry and secure your registration.

Some of the event's upcoming sessions (all in Portuguese):
  • The state of development in the mobile world 2012
  • Mobile development with WINEmbedded and tablets
  • Mobile development with MonoTouch and Mono for Android
  • Combining Scripts with Unity3D Editor to create scenes in 3D games
  • RageTools: Developing characters in 2D
  • MonoGame, XNA open source multiplatform
  • Creation of Corporate Games
  • Introduction to game development with Scratch
  • Creating games for the PlayStation Vita with Mono
Mini-Courses (all in Portuguese):
  • Character Animation with Unity3D
  • RageTools: Developing characters in 2D
  • POS automation with Android
  • Development with MonoDroid
  • MonoGame, XNA for Mac and iPhone
  • iPhone Development with MonoTouch

Mar 22, 2012

Summing up my position on Software Patents to Brazilian Authorities

They [software patents] undermine innovation and thus slow or reverse the improvement of the living conditions of the majority.

My email to SAESP regarding the Consultation on Brazilian Guidelines for Patenting Software:

Premise: As the Brazilian government is established as a democracy and therefore is 'for the people', favoring elites, domestic or foreign, is simply a very serious crime that should not be perpetrated.


The mathematical underpinnings of software development, and its cumulative nature, even more pronounced than in most other applied sciences, make the patenting of any algorithm, or grouping of algorithms (and all software can be reduced to groups of algorithms), a major obstacle to the development of thousands of features not imagined by the patent applicant; that alone already constitutes an abuse of the monopoly that would be granted.

There is no way to make software patents specific enough, in contrast to the widely generic scopes described above, to prevent abuse or to assess the real originality of the 'invention'; trying would only make the approval process for patents more expensive and lengthier, and the same would happen to infringement proceedings, as exemplified by the ongoing disputes in the U.S., Europe, and Asia, especially in the mobile arena.

It's a trap that favors, in the short term, only a few large corporations which, instead of actually investing in innovation by working collaboratively on certain fronts to bring down the total cost of invention, spend more on lawyers than on engineers to ensure that outdated or marginally better technologies stay exclusive for an absurdly long period compared to the pace of innovation needed to meet the needs and desires of Brazilian society, as well as the global one.

In the medium and long term those very corporations fall into their own trap and now risk losing billions in long processes of dubious legal validity. In conclusion, Brazil has to swim against this suicidal tide, unwisely lobbied for by transnational corporations, establish a position against software patents, and adopt measures that can effectively foster increases in the pace and quality of innovation, by way of information sharing and the collaborative development that ensues from it.

The potential for market differentiation, which is vital to maintain competitiveness, doesn't come from technological innovation per se, but from how well finished the final product comes out, especially for expanded products, which aggregate services and easy interoperability as key components.

Thanks for listening,
Rafael "Monoman" Teixeira
Electronic Engineer (Poli-USP)
Systems Developer (for the last 35 years)

* Thanks to Google Translate for doing the bulk of this translation; fine-tuning it afterwards was way easier...

Feb 16, 2012

My long post on G+ about patents

https://plus.google.com/117788095721240811664/posts/VfMqiKW53St

Discuss there

About Patents. Again

For me, regarding granted temporary intellectual monopolies (patents, but also some non-artistic copyrights), we as a global society need to answer two big questions, and then act politically to change our countries' laws and international agreements:

1) Do we need more innovation? At what pace?
2) Do we want those innovations to be widely available and affordable?

My personal answers to those questions are:

1) Surely we do, and as fast as we collectively can create the innovations, materialize them, and put them to use. I think that, except for a few Luddites and Zen masters, this would be the near-unanimous answer.
2) I believe this is also a qualified yes. But here lies a more contentious topic, as many people (liberals, etc.) want those innovations to be affordable to themselves and their circle of friends, but don't care whether they are available and affordable to others.

Now, to recap the original logic behind patents:

*[A] There is only a very small number of people with enough knowledge in their field of expertise to be able to innovate in that space.
*[B] These people would prefer to retain that knowledge and dispense it only to a few chosen apprentices, so that their small circle can benefit directly and/or commercially from it.
*[C] A temporary monopoly on new knowledge, legally enforced, would be the bait to make these people publish that knowledge, leaving it free for anyone to use after the monopoly period.
*[D] Some non-obviousness and novelty criteria would be applied to recognize new knowledge deserving protection by that temporary monopoly.

Well, let's focus on the Information Technology field, which encompasses software, chips, the web, smart mobile devices, and many more hot, trending technologies...

[A] Obviously there are some niche sub-fields (like ultra-high-density chip making) where that scarcity of knowledgeable people still holds, but in most of the field there are plenty of people well versed enough to innovate: for instance, tens of millions of software/web/mobile developers (even if you disqualify our digital teens) and hundreds of thousands of electronics engineers.

Yes, I'm counting global numbers, as in our connected world there is little to prevent innovation coming from geographically distributed, multinational teams.

So the scarcity assumption is likely false nowadays, at least in this field...

[B] Let me get to the meat of the issue here: what is the likelihood that someone can nowadays substantially advance any field without an extensive network of collaboration? Remember that for collaboration to occur, one needs to share/exchange knowledge with the other participants; though yes, collaboration can be done behind closed doors, with limited sharing.

Nevertheless, again in this field, open collaboration, with free participation by anyone who wants to contribute, as demonstrated by Free and Open Source Software projects/communities, leads to lower-cost, higher-quality, widely available innovation, and is slowly becoming the norm.

Also, the sheer number of competitors in the web space, for instance, where the cost of entry is very low, means that mere copycats can't survive a market that evolves astonishingly fast. For example, cellphones are increasingly sophisticated gadgets that billions of people, some still functionally illiterate, managed to learn to use in the last decade, and as those billions migrate to smartphones they are creating an exceptionally large new market for mobile applications and content.

In that vein, I believe that even if some people want to hold back knowledge, it will be shared by someone else who independently arrived at the same conclusion/idea and wants to collaborate to make it evolve even further. So that assumption is not necessarily false, but it is mostly irrelevant.

[C] If knowledge is increasingly shared, and collaboration over it allows even more knowledge to be found/built, then any legal monopoly over it will effectively hamper, not foster, innovation. It adds process costs to negotiate the 'licensing' terms for what will be shared, even when it is licensed free of royalties, and running costs for any RAND licensing.

But I would propose here a crowd task: can we find the global numbers to correlate all the royalties collected, and then factor out how much of them was reinvested in corporate research and development or in grants to scientific research in academia? It would also be nice to quantify how much those patented innovations benefit from open science knowledge; can we find and sum that up?

With those numbers it would be easier to reason about whether opening up basic science even more, and fostering open/collaborative technology projects, would be a better approach than letting corporations compete armed with their monopolies.

Besides, the form of the temporary monopoly was very badly designed, as it allows the monopoly holder to preclude all use of the knowledge until the privilege expires, even if the holder doesn't materialize any products embodying that knowledge in that period. It is such a nonsensical provision that I can't understand how it went unfixed over the centuries. It is the kind of nonsense that lets judges say, with a straight face, things like "Microsoft can legitimately use patents to try to destroy Android".

But worse, even if the monopoly could be made conditional on materializing some of the benefits of the protected knowledge as soon as possible, there is the issue of its duration, another part of the law that hasn't evolved to cover the completely new situation we live in. For this field, 20 years of protection is simply too much; even market-wise it is absurd to have someone barring competitors from offering alternatives or evolving present ideas.

For example, most users won't keep using non-essential web/mobile apps that don't evolve every few weeks or months. I keep coming back to Angry Birds on my tablet because new levels, birds, and themes are added from time to time...

At a minimum, even if this whole argument doesn't politically engender more drastic revisions, we need to reform the granted monopoly, to guarantee timely access to the benefits and to reduce its duration, perhaps selectively by field.

So, pending those numbers, [C] can be assumed false for this field, and rather confidently, given the track record of success of FOSS projects and of products embodying them.

If Android, which depends deeply on FOSS and evolves through wide networks of collaboration (sadly, not all of them truly open), weren't a huge success, it would not be so feared, nor the target of such a high number of patent lawsuits.

Finally, there is plenty of evidence that the law clauses built on [D] have been totally forgotten in the practice of patent offices throughout the world. Even a return to more sensible practices, avoiding big losses over lame patents as in the Eolas case, would be an enormous improvement; but the assumption is false regardless, because it depends on too much interpretation and due diligence to keep the system well balanced.

The conclusion is simple: the system doesn't work because:

1) It doesn't guarantee the timely reaping of benefits from the shared knowledge it is supposed to promote.
2) It costs too much to operate, given the current imbalances in power.

Both are in direct collision with positive answers to the two questions I started all this rambling with.

Alternatives?

I'll present mine, but I just want to help start the discussion with all stakeholders (roughly everybody in the world, even those currently 'unconnected'), so that political action follows from its conclusions and is not misdirected by the lobbying of just a few powerful corporate stakeholders...

The design key: Global Cooperation, Local Coopetition

My preferred scenario, a few years down the road, would be governments funding truly open science, and industry collectives funding open research and collaborative projects for reference designs and standards, with local customization and production. It is globalized research (as the ever-increasing scientific-technological body of knowledge is one of the truly global commons), partially global projects (global core, local customization), and, as much as possible, local production.

For that to work, patents should be dismissed in favor of open global knowledge pools, with 'research credits' acquired by adding to the pool (under meritocratic valuation) and expended by receiving financial grants from the pool for further research.

The fundamental change, given positive answers to those initial questions, is to transform:

Closed Knowledge equals Profit (all the ethically wrong IP 'industrialization', artificially faking scarcity of a plentiful, inexhaustible resource)

into

Open Knowledge equals Innovation; Materializing/Distributing Innovation equals Profit.

That is a tall order, to be achieved locally (within national borders) and globally (overriding the competition among nations [with borders] and transnational corporations [without borders]).

The final thought is:

Innovation per se isn't an intrinsically valuable thing; what makes it valuable is its ability to solve problems, to attend to needs, or to enable further innovation that accomplishes that.

If we could solve all our problems and attend to everyone's needs with what we already know or know how to build, there would be no need for innovation. But that is simply impossible: the universe won't stay put; it also evolves, it also innovates, every second...


Dec 8, 2011

About "Convergence" TV/Computer Devices

From my G+ post: https://plus.google.com/u/0/117788095721240811664/posts/gurPTpYzMGm


The problem is: if you have a general-purpose computer doubling as your TV set, the interactions in some computer-like usage scenarios become a lot harder.
Have you ever tried to type on a full keyboard resting on your lap on a common sofa? Or, worse, to mouse around with no flat surface, or had to stretch your arm to touch a faraway big screen? Not even Siri can help you while you interact with office suites, drawing programs, email clients, sophisticated web apps/sites, and other high-complexity applications.

You really need simpler, more focused apps, like the ones you have on mobile, to work in such a configuration, and that is the whole point: it is really software that is holding back the 'Convergent' (to use the old word) TV+Computer.

We need a base platform that uses new interaction mechanisms like Siri, Kinect, remote touching, etc., and more focused apps that exploit those mechanisms to bring it truly to life.

I'm working on it, as are many others, but I don't think even Apple can really make it happen in the next year, and I would say that launching a device that is not up to the task could cause a bad 'first impression'...



Commenting on a Chris Davies post citing "Apple TV-enabled iMac tipped for 2012 television attack" - SlashGear