Felipo: My own PHP framework

In my opinion, when you have built a project that you like and you don't want to let it die, forgotten on some server in Slovakia, you open source it. That is exactly what I did with Felipo (editor note: Felipo is the name of the author's cat).


If you see some similarities with other frameworks, you are right: Felipo is strongly inspired by Ruby on Rails, but written in PHP. Anyone interested in extending it is welcome. Just send me a mail and, if you want to contribute your changes back, create a pull request on GitHub. It is licensed under the Apache License, so there are few restrictions on what you can do with it.

I will very briefly summarize some of its characteristics. You will notice that some parts are in Spanish and some in English. That's because it started as a project in Spanish, not as a framework, and over time it evolved into an independent piece of software. The language for new additions will always be English.

And finally, you'll notice the lack of unit tests in this project. That is bad, and I know it. Again, anyone can add them now that it is open source.

Front controller architecture based on modules

The architecture is basically the same as in many web frameworks and is based on the design proposed by Martin Fowler. The idea, essentially, is to separate the view from the controllers. All the requests are redirected via mod_rewrite to the same PHP script (front_controller.php), which parses the request and, based on the URI string, redirects it to the right controller. This redirection is based on a convention in the URL:
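For example, a URL like the following (the names here are illustrative; the convention is module/controller/action):

/frontend/person/show

would be dispatched to the showAction() method of the PersonController class in apps/frontend/pageControllers/.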


The system is organized in modules. For example, one module can be the admin interface and another the front end. Each module corresponds to a directory in the apps directory. In the pageControllers directory inside each module you find the controllers, which are classes, and the actions, which are methods of those classes.

Multiple configuration environments

You can define a configuration for production, development, testing, etc. One environment is selected based on different rules: currently there is a default one, and another that is chosen based on the HTTP "Host" request header (it is assumed that the production environment will have the real domain as the value for this header, and dev will have "localhost"). Based on that, the right config file is loaded.
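A minimal sketch of the idea (the file names and the rule below are assumptions for illustration, not necessarily Felipo's actual code):

$env = ($_SERVER['HTTP_HOST'] === 'localhost') ? 'development' : 'production';
require 'config/config.' . $env . '.php'; // load the per-environment settings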

Extensible via plugins

The initial idea was to make this system easy to extend. A plugin is a folder in the include directory. Among the plugins currently in the system, it's worth noticing the database plugin, which implements the Active Record pattern, and the REST plugin, which adds controllers to expose the Active Records via a REST interface.

The plugins are loaded selectively according to the $config['plugins'] values in the configuration file.
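For illustration, the mechanism could look like this (the plugin names and the loader below are my assumptions, shown only to make the idea concrete):

$config['plugins'] = array('database', 'rest', 'session');

foreach ($config['plugins'] as $plugin) {
    require_once 'include/' . $plugin . '/' . $plugin . '.php'; // hypothetical file layout
}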

Database connections via Active Record

Felipo implements a very lightweight version of Active Record. For example, to save a person in the database:

class Person extends ActiveRecord {}

$person = new Person();
$person->id = 123;
$person->name = "Emmanuel";
$person->lastName = "Espina";
$person->save(); // assuming a Rails-style save() call is what triggers the INSERT

This will execute in the database:

INSERT INTO "Person" (id, name, lastName) VALUES (123, 'Emmanuel', 'Espina');

As you can see, it is very simple (in simple cases). To load the person back, you do:

$person = Person::loadById(123);

The active records go in the models directory in each module.

Easy REST resources

Now that you have a Person represented as an Active Record, you can expose it to the world with an ActiveRecordResource:

class Person extends ActiveRecordResource {}

That goes in the resources directory. Currently, for this to work, you must have a corresponding Active Record in the models directory with the name Person_AR (I'll fix this in the future).
Now, if you want to get the person over HTTP, you create a REST controller inheriting from RestControllerGenerico, for example (the class name below is illustrative):
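class PersonRestController extends RestControllerGenerico {}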

And send it a request like the following (the route is illustrative; the exact URL depends on your module layout):
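GET http://www.example.com/rest/person/123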


And you will get the person formatted as a JSON object, something like:
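{"id": 123, "name": "Emmanuel", "lastName": "Espina"}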

What else?

To keep this post short I didn't include elements like the Authenticators (there are LDAP and MySQL based authentication modules). The login and session management are also included, as another plugin.

Finally, there is a set of validators and HTML form generators that take the specification of a form (as a JSON object), create the HTML, and can then create validators at run time to test whether the input passes simple validations.
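As a sketch of the idea (the exact specification format below is an assumption of mine, not Felipo's actual schema), a form specification could look like:

{
  "name": "contact",
  "fields": [
    { "name": "email", "type": "text", "validators": ["required", "email"] }
  ]
}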

You can investigate all of these features by reading the code (and documenting it if you want 🙂 )

The interesting thing about this framework is that it is small, and one of the main design decisions was to make it fully modular through the use of plugins. Almost everything is a plugin, even the database connectivity. This should keep the system simple enough for anyone to understand it and extend it relatively easily.

Multivalued geolocation fields in Solr

Today we'll see a small workaround that can be used to handle a very common use case in geographic search.

The example use case is the following:

The client is located in Buenos Aires and wants a purple tie.

He enters the query "purple tie" in the search box of our store's web page.

The system returns the ties that can be purchased in stores near Buenos Aires, and then the ones in our stores in Montevideo (i.e., ties that can be found in nearby stores).

That is, the system does not return ties from our stores in Spain, because nobody would travel to Spain to buy a tie, nor order one across the Atlantic Ocean (no tie is that special). This is part of the first and probably only problem in search systems: returning only relevant results. In this case, relevant considering the user's location.

In theory, the problem could be easily solved with a multivalued coordinate field, where each tie (or product, in our system) would have the list of coordinates of the places where it can be found. We would then filter our products within a circle centered on Buenos Aires to get the nearby ones.

This works fine if the product is available in only one store, but the problem arises when it is in multiple locations simultaneously (i.e., when there is a list of coordinates in the field, not a single one): Solr does not allow filtering on multivalued coordinate fields.

But not everything is lost, and in this post I’ll propose a workaround to solve this issue.

We are going to create another index, containing only the stores: their ids and their locations. Using C-style pseudocode, we can consider this as a document in our "stores index":

struct {
latlon location;
int storeID;
} store;

Having this other index (another Solr core, for example), we are going to split our query in two:

1. For the first query, you query the "stores index". You perform a geographic filter query (using the spatial Solr functionality) and you get back the ids of the stores near the central point you specified. You haven't used the query entered by the user in this phase yet, only the location of the user, obtained by some means:

q=*:*&fl=storeID&fq={!geofilt pt=-34.60,-58.37 sfield=location d=5}

And you obtain a list of store ids:

id=34, id=56, id=77

2. In the second phase, you perform the query in the regular way (now passing the terms that the user entered), but with the addition of a filter query (a regular filter query, not a geographic one) in the following fashion:

q="product brand x"&fq=(storeID:34 OR storeID:56 OR storeID:77)

where the store ids are the ones returned by the first query and, consequently, the ones near the central point.

In this case you are restricting your results to the ones near the user. Alternatively, you can boost the nearest results while still showing the ones that are far away on later pages of the results.

To summarize, we use two queries with regular functionality to get the advanced functionality we want: the first query gets the stores near the zone, and the second is the actual search.
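Here is a rough sketch of the two phases glued together in PHP, assuming the two indexes live in two Solr cores ("stores" and "products") that answer JSON over HTTP; the URLs, core names and the example query are illustrative:

// Phase 1: geographic filter against the stores core to get the nearby store ids
$storesUrl = 'http://localhost:8983/solr/stores/select'
    . '?q=*:*&fl=storeID&rows=1000&wt=json'
    . '&fq=' . urlencode('{!geofilt pt=-34.60,-58.37 sfield=location d=5}');
$near = json_decode(file_get_contents($storesUrl), true);

// Build a regular (non-geographic) filter query from those ids
$clauses = array();
foreach ($near['response']['docs'] as $doc) {
    $clauses[] = 'storeID:' . $doc['storeID'];
}
$fq = '(' . implode(' OR ', $clauses) . ')';

// Phase 2: the actual user query against the products core, restricted to the nearby stores
$productsUrl = 'http://localhost:8983/solr/products/select'
    . '?q=' . urlencode('purple tie') . '&wt=json&fq=' . urlencode($fq);
$results = json_decode(file_get_contents($productsUrl), true);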

There is a patch in development to accomplish the same functionality (SOLR-2155), but it has not been committed yet. Meanwhile, here you have a good example of what you can do with multiple queries to Solr.

Ham, spam and elephants (or how to build a spam filter server with Mahout)


Something quite interesting has happened with Lucene. It started as a library; then its developers began adding new projects based on it. They developed another open source project that would add crawling features (among other features) to Lucene: Nutch, which is in fact a full-featured web search engine that anyone can use or modify. Inspired by some famous papers from Google about MapReduce and the Google File System, new features to distribute the index were added to Nutch and, eventually, those features became a project of their own: Hadoop. Since then, many projects have been developed on top of Hadoop. We are witnessing a big explosion of open source code, ignited by the Lucene spark.

All these projects are in some way related to content processing. For all of you interested in search and information retrieval, we are now going to talk about another project, one whose domain lies outside the strict confines of search but that can teach you some interesting things about content processing.

Recently I have been reading about this new library, Mahout, which brings all those obscure and mysterious machine learning algorithms together in one library. A lot of modern web sites are using machine learning techniques. These algorithms are rather old and well known, but were popularized recently by their extensive use in social network sites (Facebook knowing better than you who could be your best friend) or by Google (reading your mind and guessing what you may have wanted to write in the search box).

In simple words (this is Tom Mitchell's classic definition), a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.

For example, if you wanted to make a program to recognize captchas, you would have:

T: recognizing captchas

P: percentage of words correctly recognized

E: a database of captchas with their correct word spellings

So, it appears that all they needed was computing power. And they need a lot of it. For example, in supervised machine learning (I learned that term a month ago, so don't expect me to be very academic) you show the machine some examples (experience) of what you want, and the machine extracts some patterns from them. It is pretty much the same as what a person does when learning: from some examples, a person uses their past knowledge to infer some kind of pattern that will help them classify future events of the same kind. Well, computers are pretty dumb at learning things, so you have to show them a lot of examples before they infer the pattern.

Classification, however, is only part of the problem. There are three big areas that Mahout targets (and there will be more in the future, because the project is relatively new): classification, clustering, and recommendation. Typical examples of these:

  • Classification: this is supervised learning. You give the machine a lot of instances of… things (documents, for example) along with their categories. From those, the machine learns to classify future instances into the known categories.
  • Clustering: similar to classification in the sense that it makes groups of things, but this one is not supervised. You give the machine a lot of things, and the machine makes groups of similar items.
  • Recommendation: it is exactly what IMDb or Amazon does with the recommended movies or books at the bottom of the page. Based on what other users liked (which is measured through the star ratings that users give), Amazon can infer that "other users who liked this book also liked these others".

If you want a better definition of these three categories you can go to Grant Ingersoll’s blog.

Offline example

Now I'll show you a simple program that I think is a pretty obvious classification example. We are going to classify mail into spam and not-spam (which is called ham by the people who research this). The steps are the following:

  1. Get a ham/spam corpus from the web (already classified, of course).
  2. Train a classifier with 80% of the corpus, and leave 20% for testing.
  3. Create a simple web service that classifies spam online: you provide it a mail, and it will say "good" or "bad" (or rather "ham" or "spam").

The corpus we are going to use is the one from SpamAssassin, an open source antispam filter from Apache. This is a tutorial, so we are not trying to classify very difficult mails; the point is to show how simply this can be done with Mahout (of course, the difficult stuff was programmed by the developers of the library). This corpus is a pretty easy one, so the results will be very satisfying.

Mahout comes with some examples already prepared. One of those is the 20 newsgroups example, which tries to classify many mails from newsgroups into their categories. This example is found in the Mahout wiki and, luckily for us, the format of the newsgroup messages is pretty much the same as our mails, so we are going to apply the same processing chain to our mails as the 20 newsgroups example. By the way, we are going to use a classification algorithm called naive Bayes, which uses the famous Bayes theorem that I already mentioned in a previous post. I'm not going to explain how the algorithm works; I'll just show you that it works!

Mahout has two driver programs (they are called that way because they are also used to run map reduce jobs on Hadoop): one for training a classifier and the other for testing it.
When you train the classifier you provide it with a file (yes, a single file) that contains one document per line, already analyzed. "Analyzed" in the same sense that Lucene analyzes documents: in fact, we are going to use the Lucene StandardAnalyzer to clean the documents a little and transform them into a stream of terms. That stream is put on a line of this training file, where the first term is the category the item belongs to. For example, the training file will look like this:

ham new mahout version released
spam buy viagra now special discount

A small program comes with Mahout to turn directories of documents into this format. The directory must have an internal subdirectory for each category. In our case, we are going to separate our corpus into two directories, one for testing and the other for training (both in <mahout_home>/examples/bin/work/spam, where <mahout_home> is the directory where you unzipped the Mahout distribution).
In each of them we are going to put a spam directory and a ham directory, so the layout looks like this:
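<mahout_home>/examples/bin/work/spam/
    train/
        ham/
        spam/
    test/
        ham/
        spam/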

We manually take about 80% of the ham and put it in train/ham and the rest in test/ham, and we do the same with the spam in train/spam and test/spam (it has never been easier to prepare a test set!).
Next, we are going to prepare the train and test files with the following commands:

bin/mahout prepare20newsgroups -p examples/bin/work/spam/train -o examples/bin/work/spam/prepared-train -a org.apache.mahout.vectorizer.DefaultAnalyzer -c UTF-8
bin/mahout prepare20newsgroups -p examples/bin/work/spam/test -o examples/bin/work/spam/prepared-test -a org.apache.mahout.vectorizer.DefaultAnalyzer -c UTF-8

DefaultAnalyzer is a Lucene analyzer (actually, it is the Lucene StandardAnalyzer wrapped in a Mahout class).

Now we are going to train the classifier. Training the classifier means feeding Mahout the train file and letting it build internal structures with the data (yes, as you can deduce from my use of the word "internal", I have no idea how those structures work).

bin/mahout trainclassifier -i examples/bin/work/spam/prepared-train -o examples/bin/work/spam/bayes-model -type bayes -ng 1 -source hdfs

The model is created in the bayes-model directory; the algorithm is Bayes (naive Bayes); and -source hdfs says that we are using the Hadoop Distributed File System (we are not, but that is what you tell the command when you are not using a distributed datastore like HBase). Finally, -ng is the size of the n-grams to use. N-grams are groups of words; with larger n-grams you add more context to each word (its surrounding words), so the larger the n-grams, the better the results should be. We are using 1 because the better results obviously cost more processing time.

Now we run the tests with the following command:

 bin/mahout testclassifier -m examples/bin/work/spam/bayes-model -d examples/bin/work/spam/prepared-test -type bayes -ng 1 -source hdfs -method sequential

And after a while we get the following results:

Correctly Classified Instances   : 383   95.75%
Incorrectly Classified Instances :  17    4.25%
Total Classified Instances       : 400

Confusion Matrix
   a    b   <-- Classified as
 189   11 | 200  a = spam
   6  194 | 200  b = ham

Very good results! Reading the confusion matrix: only 11 spam mails slipped through as ham, and only 6 legitimate mails were wrongly flagged as spam.

A server to classify spam in real time

But we haven't done anything different from the 20 newsgroups example yet! Now, what can we do if we want to classify mails as they arrive? We are going to create an antispam server: the mail server will send it every mail that it receives, and our server will respond whether it is ham or spam (applying this procedure).

The server will be as simple as possible (this is just a proof of concept):

// Imports assume the Mahout 0.x Bayes API and the servlet 2.x API;
// exact package names may vary between Mahout releases.
import java.io.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;
import org.apache.mahout.classifier.bayes.exceptions.InvalidDatastoreException;

public class Antispam extends HttpServlet {

    private SpamClassifier sc;

    public void init() throws ServletException {
        try {
            sc = new SpamClassifier();
            sc.init(new File("bayes-model"));
        } catch (FileNotFoundException e) {
            throw new ServletException(e);
        } catch (InvalidDatastoreException e) {
            throw new ServletException(e);
        }
    }

    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Reader reader = req.getReader();
        try {
            // classify the mail body and measure how long it takes
            long t0 = System.currentTimeMillis();
            String category = sc.classify(reader);
            long t1 = System.currentTimeMillis();

            resp.getWriter().print(
                String.format("{\"category\":\"%s\", \"time\": %d}", category, t1 - t0));
        } catch (InvalidDatastoreException e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
    }
}

Since this is a very simple example, a plain servlet will do. The important class is SpamClassifier:

// Again, the imports assume the Mahout 0.x Bayes API; the package
// names are from that era and may differ in other releases.
import java.io.*;
import org.apache.lucene.analysis.Analyzer;
import org.apache.mahout.classifier.BayesFileFormatter;
import org.apache.mahout.classifier.ClassifierResult;
import org.apache.mahout.classifier.bayes.algorithm.BayesAlgorithm;
import org.apache.mahout.classifier.bayes.common.BayesParameters;
import org.apache.mahout.classifier.bayes.datastore.InMemoryBayesDatastore;
import org.apache.mahout.classifier.bayes.exceptions.InvalidDatastoreException;
import org.apache.mahout.classifier.bayes.interfaces.Algorithm;
import org.apache.mahout.classifier.bayes.interfaces.Datastore;
import org.apache.mahout.classifier.bayes.model.ClassifierContext;
import org.apache.mahout.vectorizer.DefaultAnalyzer;

public class SpamClassifier {

    private ClassifierContext context;
    private Algorithm algorithm;
    private Datastore datastore;
    private File modelDirectory;
    private Analyzer analyzer;

    public SpamClassifier() {
        analyzer = new DefaultAnalyzer();
    }

    public void init(File basePath) throws FileNotFoundException, InvalidDatastoreException {
        if (!basePath.isDirectory() || !basePath.canRead()) {
            throw new FileNotFoundException(basePath.toString());
        }
        modelDirectory = basePath;

        // the datastore represents the model built during the training phase
        algorithm = new BayesAlgorithm();
        BayesParameters p = new BayesParameters();
        p.set("basePath", modelDirectory.getAbsolutePath());
        datastore = new InMemoryBayesDatastore(p);
        context = new ClassifierContext(algorithm, datastore);
        context.initialize(); // load the model into memory
    }

    public String classify(Reader mail) throws IOException, InvalidDatastoreException {
        // analyze the mail with the same analyzer used at training time
        String[] document = BayesFileFormatter.readerToDocument(analyzer, mail);
        ClassifierResult result = context.classifyDocument(document, "unknown");
        return result.getLabel();
    }
}

You have a datastore and an algorithm. The datastore represents the model that you previously created by training the classifier. We are using an InMemoryBayesDatastore (there is also HbaseBayesDatastore, which uses the Hadoop database), and we are providing it the base path and the n-gram size. We are using n-grams of size 1 to simplify this example; otherwise it would be necessary to postprocess the analyzed text to construct the n-grams.
The algorithm is the core of the method, and it is an obvious instance of the Strategy design pattern. We are using BayesAlgorithm, but we could have used CbayesAlgorithm, which implements the complementary naive Bayes algorithm.
ClassifierContext is the interface you'll use to classify documents.

We can test our server using curl:

curl http://localhost:8080/antispam -H "Content-Type: text/xml" --data-binary @ham.txt

and we get

{"category":"ham", "time": 10}


As we have seen, the spam filtering process can be separated into two parts: an offline process, where you take a lot of mails already classified by someone and train the classifier, and an online process, where you classify a new document using the model previously created. The model can evolve: you can add more documents with more information and, after rerunning the offline processing, update the online server with the new model.

The model can be very big, and this is where Hadoop enters the scene. The offline process can be sent to a cluster running Hadoop, and using the same libraries (Mahout!) you run what looks like exactly the same algorithms and get the results faster. Of course, the algorithm is not literally the same, because it is being executed in parallel by the thousands of computers that you surely have in your cluster (or the two or three PCs you have).

Mahout was designed with this in mind. Most of its algorithms were tailored to work over Hadoop, but the interesting thing is that they can also work without it, for testing purposes or when you must incorporate the algorithms into a server without the need for distributed computation, as we did in this post. The combination of being usable over a cluster and embeddable in an application makes Mahout a powerful library for modern applications that work with web-scale data.

Hello world!

Welcome to my first blog! I've had the idea of writing a blog for a very long time, but for some reason I never started working on it. Probably because I thought that nobody would be interested in what I had to write. Or, more likely, because I'm lazy.

Anyway, at my new job (which may be the subject of another post) they place a lot of importance on research and development, and they encourage us to keep a blog and write about all the things we come up with. And this is just the excuse that somebody lazy like me needs to start a new project: my long-anticipated blog, in this case.

This blog will be mainly about geek stuff, written by a geek and aimed at geeks. The software that we use is Solr, a big search application based on Lucene, which is an inverted index library. Oddly, Solr doesn't have as much bibliography as other Apache projects have: I found only one book, by Packt Publishing, and nothing else. Of course there is a lot of documentation on the website, but that's not the same. So part of this blog will be about the research side of my work, creating benchmarks and posting the results and all that. And also about topics that have nothing to do with Solr but that I always wanted to talk about, just because I find them interesting.

Finally, I'll tell you that English is not my native language but, as we all know, Esperanto didn't have the success that its creator expected, and English was adopted as the universal language (natural selection, perhaps). So I'll be writing all my posts in English, and that's why you'll find some strange grammar and vocabulary in my blog!!!

I hope you all like this blog-project!