var ebf: Blog // by Eduardo Fonseca

DJI Mavic Pro

I’m really excited about this drone. Can’t wait for it to arrive.

Why Leica

One of the cool perks of joining Cargomatic is the ability to have 1:1s by the ocean. Add to the mix that, for a photographer, Venice is a place with so much to shoot. Street art. Magnificent ocean views. My beloved palm trees.

Since I bought my Leica M-P, I try to take it with me everywhere. It’s always in my bag, and I’m always shooting, trying to improve my technique and learn how to meter on my own (still learning that!).

Three days ago I realized that the framelines weren’t being shown correctly - the mechanism wasn’t locking into place properly.

So I called Leica and asked for a warranty repair. All good, quick and painless. But the expected repair time was 2-3 weeks. Wow. So long.

After figuring out how to ship the camera, I picked up my old friend: The Sony α7. This is the camera that opened up the world of Vintage Lenses for me and actually brought me into Leica.

The funny thing is, I can’t shoot with it anymore. First, the weight: my Summilux is heavy, but the M-P body balances it really well. The Sony is so light that the balance of the combination is completely off.

Second, by looking into the EVF, I now understand the whole “tunnel vision” effect that Leica shooters talk about (and that I used to scoff at).

The M-P is so simple and gives me the ability to see what’s outside the lens’s field of view:

Sony:

Leica:

As you can see above, the focus peaking on the Sony is really interesting and nails focus most of the time - wide open, I still need to zoom in to get it right. I focus much faster on the M-P, mostly because I’m completely adapted to the rangefinder patch now.

And third, look how crowded the Sony is. It’s a lot of information to take in while trying to nail your exposure. Honestly, I feel like I have Terminator vision:

I'LL BE BACK

A long time ago, Aline and I stumbled across a book called High Tech, High Touch. I think that book perfectly summarizes how I feel about the Leica: it’s really high tech (digital, great sensor) and high touch - you produce the photograph yourself, without (lots of) help from an automated system. Focusing, aperture, ISO, shutter speed - it’s all up to you.

Maybe it’s time to sell the Sony and invest in a film Leica. Maybe. :smile:

WWDC 2014

WWDC 2014 has been a blast. I love the new changes (I was secretly hoping for some specific ones) and I’m much more confident in iOS’s future as an amazing, developer-friendly platform.

Ars Technica (via the amazing Andrew Cunningham) asked me some questions about iOS 8, Swift and more. Check it out!

Developers react to iOS 8 and its long-awaited opening of the platform

Explaining iOS 8’s extensions: Opening the platform while keeping it secure

Cars

These are my first pictures with the Leica Leitz Summicron M 50mm f/2. I love the colors and the “Leica Glow”. A great combination with the Sony A7.

Thoughts on the Completion Block Pattern

Nowadays, most Open Source iOS libraries adopt the “Completion Block Pattern” that Apple introduced with iOS 4. That’s great, and I’m all for consistency. The problem is that most authors are not careful or clear about which queue the completion block will be dispatched to.

Some assume that you must want it dispatched to the main queue. Others just run it on whatever queue their code happened to be running on. Heck, not even Apple is consistent.

I propose the following:

  • Allow the caller to specify the queue the block will be dispatched to;
  • Make it very clear in your method signature which queue the completion block will be dispatched to, e.g., doSomethingOnTheBackgroundWithThis:withMainQueueCompletionBlock:;

Sometimes, less is more. In this case, be verbose! Making the wrong assumption can be the source of (a lot of) unnecessary work.
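
Here’s a minimal sketch of both ideas, with hypothetical names (EBFDownloader, fetchData…): the designated method lets the caller pick the dispatch queue, and the convenience variant spells the queue out in its selector.

#import <Foundation/Foundation.h>

@interface EBFDownloader : NSObject
// The caller decides where the completion block runs.
- (void)fetchDataWithCompletionQueue:(dispatch_queue_t)queue
                          completion:(void (^)(NSData *data, NSError *error))completion;
// The selector itself documents the queue.
- (void)fetchDataWithMainQueueCompletionBlock:(void (^)(NSData *data, NSError *error))completion;
@end

@implementation EBFDownloader

- (void)fetchDataWithCompletionQueue:(dispatch_queue_t)queue
                          completion:(void (^)(NSData *data, NSError *error))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSData *data = [NSData data]; // stand-in for the real, expensive work
        if (completion) {
            // Dispatch to whatever queue the caller asked for - no guessing.
            dispatch_async(queue, ^{
                completion(data, nil);
            });
        }
    });
}

- (void)fetchDataWithMainQueueCompletionBlock:(void (^)(NSData *data, NSError *error))completion
{
    [self fetchDataWithCompletionQueue:dispatch_get_main_queue() completion:completion];
}

@end

With that in place, the call site reads exactly like what it does: [downloader fetchDataWithMainQueueCompletionBlock:^(NSData *data, NSError *error) { /* safe to touch UIKit here */ }];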

Optimizing cv::drawContours on iOS

OpenCV requires no introduction - it is amazing1. Its portability and speed make any project that requires Computer Vision a breeze. And being open source really helps.

But sometimes moving across platforms can be tricky. In our case, the problem was that the cv::drawContours call ran almost instantly on the iOS Simulator, but took 20-45 seconds on the device. Not good™.

Instruments gave me a clear picture of what could be happening:

Holy… lots of std::vector<> appends, inside cv::drawContours(). Time to understand this better. Let’s go back a little and review how to detect contours with OpenCV.

Look at that silhouette!

If you follow OpenCV’s documentation, you will see that detecting contours inside an image is pretty straightforward:

Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;

/// Detect edges using canny
Canny( src_gray, canny_output, thresh, thresh*2, 3 );
/// Find contours
findContours( canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );

/// Draw contours
Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
for( int i = 0; i< contours.size(); i++ )
{
	Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );
	drawContours( drawing, contours, i, color, 2, 8, hierarchy, 0, Point() );
}

Not a big deal. You feed cv::findContours() with the output of cv::Canny() and then iterate over the results, drawing the contours with cv::drawContours(). Easy.

Try it on the desktop. It flies. So, what the heck is happening on iOS then?

The cost

That left me stumped. Ok, our app has 18k-20k contours to draw, far more than the simpler examples online. But it should still be crazy fast; after all, we are just drawing what we have already detected.

That’s when the beauty of Open Source comes in. I decided to check what was happening inside cv::drawContours and opened the source file for an analysis.

There I found this snippet:

std::vector<CvSeq> seq;
std::vector<CvSeqBlock> block;

...

seq.resize(last);
block.resize(last);

for( i = first; i < last; i++ )
	seq[i].first = 0;

A little further down, things get even more interesting.


    if( contourIdx >= 0 )
    {
        CV_Assert( 0 <= contourIdx && contourIdx < (int)last );
        first = contourIdx;
        last = contourIdx + 1;
    }

    for( i = first; i < last; i++ )
    {
        Mat ci = _contours.getMat((int)i);
        if( ci.empty() )
            continue;
        int npoints = ci.checkVector(2, CV_32S);
        CV_Assert( npoints > 0 );
        cvMakeSeqHeaderForArray( CV_SEQ_POLYGON, sizeof(CvSeq), sizeof(Point),
                                 ci.data, npoints, &seq[i], &block[i] );
    }

    if( hierarchy.empty() || maxLevel == 0 )
        for( i = first; i < last; i++ )
        {
            seq[i].h_next = i < last-1 ? &seq[i+1] : 0;
            seq[i].h_prev = i > first ? &seq[i-1] : 0;
        }
    else
    {
        size_t count = last - first;
        CV_Assert(hierarchy.total() == ncontours && hierarchy.type() == CV_32SC4 );
        const Vec4i* h = hierarchy.ptr<Vec4i>();

        if( count == ncontours )
        {
            for( i = first; i < last; i++ )
            {
                int h_next = h[i][0], h_prev = h[i][1],
                    v_next = h[i][2], v_prev = h[i][3];
                seq[i].h_next = (size_t)h_next < count ? &seq[h_next] : 0;
                seq[i].h_prev = (size_t)h_prev < count ? &seq[h_prev] : 0;
                seq[i].v_next = (size_t)v_next < count ? &seq[v_next] : 0;
                seq[i].v_prev = (size_t)v_prev < count ? &seq[v_prev] : 0;
            }
        }
        else
        {
            int child = h[first][2];
            if( child >= 0 )
            {
                addChildContour(_contours, ncontours, h, child, seq, block);
                seq[first].v_next = &seq[child];
            }
        }
    }

In a nutshell, OpenCV allocates and initializes bookkeeping for the whole contours vector every time the loop iterates (note the seq.resize(last) and block.resize(last) above). That’s not very efficient when you have 18k contours to process on a mobile device. So, as an experiment, I tried to optimize a bit on my side.

Hey. Look at that.

Since we only need a top-level pass over the contour hierarchy, this did the trick for me:

// Walk only the top-level contours via their "next sibling" links (hierarchy[idx][0]).
for (; idx >= 0; idx = hierarchy[idx][0]) {
	// Hand drawContours a single-element contour (and hierarchy), so it only
	// sets up the CvSeq bookkeeping for the one contour we are drawing.
	std::vector<std::vector<cv::Point> > contour;
	contour.push_back(contours[idx]);
	std::vector<Vec4i> plainHierarchy;
	plainHierarchy.push_back(hierarchy[idx]);

	Scalar color = Scalar( rng.uniform(0, 255), rng.uniform(0,255), rng.uniform(0,255) );

	drawContours(drawing, contour, 0, color, 2, 8, plainHierarchy, INT_MAX);
}

And the results:

Touché. From 20-45 seconds to ~200ms. Mission accomplished!

  1. Yes, I know Objective-C++ is weird. But, as a C++ geek, I find std::vectors of NSIntegers fun :)

Playing with the Helios 44m-4

In a nutshell: I love this lens.

UIPhotographer

It’s all Everaldo’s fault

I kept repeating this to myself as we were hiking towards some woods at 1am, trying to take this picture. The area around the Golden Gate Bridge is always very cold for a Brazilian like me, and after a pizza feast with some wine, the idea of hiking with lenses, tripods, and cameras somehow didn’t sound very compelling.

Aline was always into photography, but I never got very excited about it. I don’t have “the eye” for taking great pictures, so it was never my first choice when travelling or meeting friends. “Pictures? No thanks”.

Then Everaldo introduced me to the technical side of photography. Hey, that’s interesting. Lighting? ISOs? Exposure? Aperture? Lenses? Custom camera firmwares? I’m in.

The problem is that, being a developer, sometimes photography frustrates me:

The world is always changing

Digital photography lets you replicate a very cool cycle from programming: Compile, test, debug. Except… you can’t really debug properly, since you can’t stop, well, the world.

Sometimes a picture that was supposed to be about a landscape gets photobombed by a bird:

I wish I could just:

_bird.alpha	= 0.0f;

No such luck.

People just don’t stand still

Being able to shoot in manual mode is great… except when you really want to capture something happening in near-real time with a living subject.

Pair that with a manual focus lens like the Helios 44m-4 and then you have a problem.

Darling, please stand still. Daddy is almost done figuring out the best ISO for this picture and, of course, making sure everything is in focus. Oh… you moved. The depth of field is very narrow, you know? Let’s start again!

Every time I start mentioning the Exposure Triangle, I lose the subject, just like when I’m trying to explain BDD to someone. Damn.

Oh the UI

Adobe, about Lightroom: I feel that the CIFilter API is cozier than your fancy, weird, custom dark UI.

Now I understand why so many people try to make photo apps. Nobody has cracked this UI yet. And the in-camera UI? Oh god.

The “good old cameras” are coming back for a reason. Their interface is simple and straight to the point. To enable multiple exposures on my Canon 6D, I need to delve into three layers of bad UI. Need to change the aperture? Turn this weird dial that’s also a navigation keypad.

Where’s the HIG for cameras?

But you know what?

Back to that night: after hiking so much, the view was amazing. Seeing the city from so far away made it all worthwhile. And even with a very big raccoon coming our way, showing its teeth, we didn’t flinch - not because we are brave or anything… our cameras were exposing!

I learned how to contemplate the amazing views and places I was visiting while my camera was capturing them.

It’s amazing and highly frustrating. Just like life.

Pictures from my latest trip to California

The Rock, The Lighthouse, and The Trail, by Eduardo Fonseca (ebf) on 500px.com.

Can’t wait to get back. :)

Fix for a dropped Canon EF 50mm f/1.8

On my latest trip, during a long exposure shoot, a very nasty gust of wind knocked my 60D to the ground. The damage was… hard to grasp1 (as you may imagine):

  • The camera got scratched2 on the front.
  • The lens hood, imploded.
  • The lens, exploded.

So, after I got home, I did what every tinkerer is obliged to do: try to fix the lens myself.

After googling for a bit, I found Dave’s blog, linking to a very handy file from Yosuke Bando that shows how to disassemble the lens.

The file is mirrored here.

I grabbed my tools3 and… well, the results are amazing! The lens came back to life and I’m a happy camper4.

  1. According to @_everaldo, war marks on cameras are cool. Way too painful for me, for now.

  2. Deeply. Oh the pain.

  3. Not you, Xcode.

  4. With a scarred camera. You can’t win everything :)

New blog

After seeing so many smart people migrate to Octopress, I decided to rebuild my old blog the way the cool guys do it nowadays: Octopress and Heroku.

But I decided against importing my old posts. I think they reflect another part of my life. And, like someone very smart once said:

Never underestimate the desire for a clean slate, Mr. Lampkin. - Admiral Adama

So, new blog, new life. I plan to write about iOS and Android development around here. And since I’m rediscovering Ruby (I started with Ruby in 2000!), I think that will show up too.

So say we all1.

  1. I just finished watching BSG. :)
