Lightroom – export original in-camera JPGs

June 2, 2012

I’ve searched around a bit and could not find a way to filter down to a subset of photos and copy (export) their original in-camera JPGs, unmodified, into a new folder.

  • Export has an “Original” file format, but it comes out as .CR2 for me, since I shoot RAW+JPG.
  • Even when I shoot JPG in-camera, exporting in the “Original” format adds extra metadata, so it does not produce the original camera JPG.

Here’s a hack solution: Export the files anyways, with modifications and all, just to use the filenames. Then use a script to copy those filenames from the original source’s folder to your destination folder.

If this was Linux, I could write a one-liner in my head:

for file in /path/to/picked-set/*.jpg; do cp "/path/to/originals/$(basename "$file")" /path/to/destination/; done

But this is Windows, so some quick Googling led me to my answer.

FOR /F "tokens=*" %G IN ('dir /b ^"\path\to\picked-set\*.jpg^"') DO copy "\path\to\originals\%G" \path\to\destination\

You can also insert “~n” between the “%” and “G” (making it “%~nG”) to strip the file extension, in case you want to use the JPG filenames to copy the CR2/RAW files. Something like this:

FOR /F "tokens=*" %G IN ('dir /b ^"\path\to\picked-set\*.jpg^"') DO copy "\path\to\originals\%~nG.CR2" \path\to\destination\
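For reference, the same trick can also be written portably. This is a sketch of mine, not anything Lightroom provides; the function name and paths are placeholders, and it covers both the plain copy and the extension-swap variant:

```python
# A portable sketch of the filename-matching copy trick (copy_picked and
# the paths are placeholders of mine, not anything Lightroom provides).
import shutil
from pathlib import Path

def copy_picked(picked_dir, originals_dir, dest_dir, raw_ext=None):
    """Copy, from originals_dir, every file whose name matches a JPG in
    picked_dir. If raw_ext is given (e.g. ".CR2"), swap the extension
    first -- the batch-file "%~nG" trick."""
    picked_dir, originals_dir, dest_dir = (
        Path(picked_dir), Path(originals_dir), Path(dest_dir))
    for jpg in sorted(picked_dir.glob("*.jpg")):
        name = jpg.stem + raw_ext if raw_ext else jpg.name
        shutil.copy2(originals_dir / name, dest_dir / name)

# copy_picked("/path/to/picked-set", "/path/to/originals", "/path/to/destination")
# copy_picked("/path/to/picked-set", "/path/to/originals", "/path/to/destination", ".CR2")
```

Using copy2 instead of copy preserves file timestamps, which matters if you later sort the exported set by file date.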


Geolocation on mobile (smart)phones

April 4, 2012

Typically you have a GPS receiver, some sort of cell tower triangulation, and WiFi databases. There’s Assisted GPS as well, but for the most part, GPS and WiFi do most of the work.

However, this requires leaving your WiFi radio on. Who knows how much battery it uses scanning for networks when you’re not near one – that’s a separate story.

I keep WiFi on all the time, wherever I am, regardless of whether I’m connected to a network (and of course I use it at work and home.) I use tons of geolocation services, even some automatic ones. I care about location accuracy, so I just leave it on. I’m tied to a desk most of the time, so I’m not the type to cram in bits of battery life whenever I can.

But what about users who don’t use it as much, or don’t use WiFi? They turn WiFi off, which removes WiFi geolocation as an option when they do use a location service. Consequently, a fine location may take much longer to obtain, if it can be obtained at all. The savings of keeping WiFi off come at the cost of not having WiFi available for geolocation when you need it. Let’s try to compromise.

Here’s my proposal/idea: add a feature to the operating system (Android, in our case). When a user/app requests a fine location, and the user has WiFi off but WiFi location retrieval enabled, automatically turn WiFi on, retrieve the location, and turn it back off. Each transaction should take about 2-4 seconds, depending on driver and hardware. Of course, this could have potentially large battery/other consequences, and should be an option (on by default, I’m thinking.)
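A rough sketch of the flow I’m proposing, with made-up stand-ins (FakeWifiRadio and fine_location are illustrative only, not real Android APIs):

```python
# Illustrative stand-ins only -- FakeWifiRadio and fine_location are not
# Android APIs; they just model the proposed toggle-on, fix, toggle-off flow.
class FakeWifiRadio:
    def __init__(self):
        self.enabled = False

    def enable(self):
        self.enabled = True    # real hardware needs a moment to scan

    def disable(self):
        self.enabled = False

    def scan_location(self):
        # Pretend a scan of nearby APs resolves to coordinates.
        return (37.4220, -122.0841) if self.enabled else None

def fine_location(radio, wifi_location_allowed=True):
    """If WiFi is off but WiFi geolocation is allowed, flip the radio on
    just long enough for a fix, then restore its previous state."""
    was_enabled = radio.enabled
    if not was_enabled and wifi_location_allowed:
        radio.enable()
    fix = radio.scan_location()
    if not was_enabled:
        radio.disable()        # leave the radio how we found it
    return fix
```

The key design point is restoring the radio’s previous state: a user who keeps WiFi on all the time never sees the toggle, while a user who keeps it off only pays for those 2-4 seconds per fix.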

Most people I know have WiFi off by default but WiFi geolocation enabled, which doesn’t help when WiFi is off – and they ask me why their location is so inaccurate. This could be a nice compromise.

foursquare’s move to OpenStreetMap – I don’t like it

March 6, 2012

Google Maps was a standard. As scary as it is to use Google for services, its implementation is much better for the masses. I don’t see why foursquare switched over, and it seems I’m not the only one.

Polling rates in music games

February 24, 2012

A common misconception about music game polling rates:

I still don’t understand why people think that overclocking USB ports help with IIDX. The game only polls for input every 60th of a second, and the USB polls for input every 250th of a second. There’s no reason to overclock your port; the USB port polls more than 4 times before the game updates itself.

I thought to myself… this can’t possibly be the case. I tried to construct scenarios in my head where it held, and found that there is a timing where increasing the USB polling rate helps even though the game polls the USB buffer slowly. Many cycles need to pass before it happens, but it does (and it sure does happen often.)

Here was my response.

Even if IIDX polls the USB buffer at 60Hz (every 16.6ms) and USB polls your inputs at 125Hz (every 8ms), there can be a timing when the game polls the buffer after you hit the key, but before the key input has been placed into the buffer.

Classic example of a situation where increased polling rate would make no difference:
8ms: USB polls your input. sees nothing.
16ms: USB polls your input. sees nothing.
16.2ms: you hit a key
16.7ms: IIDX polls USB buffer. sees nothing.
24ms: USB polls your input. retrieves the input you hit at 16.2ms
33.3ms: IIDX polls USB buffer, retrieves the input you hit at 16.2ms, that the USB buffer retrieved at 24ms.
(Now whether or not this affects your game is a different story, and depends on the game.)

My guess is that increasing the USB polling rate from 125/250Hz to 1000Hz will make it so that your input will actually be stored in the PC’s buffer sooner after you hit it. It would make the accuracy fine enough to not worry about whether or not the PC has retrieved your input (so you only have to worry if the game polled the buffer or not)

1000Hz USB making no difference:
15ms: USB polls your input. sees nothing.
16ms: USB polls your input. sees nothing.
16.2ms: you hit a key
16.7ms: IIDX polls USB buffer. sees nothing.
17ms: USB polls your input. gets the key.
33.3ms: IIDX polls USB buffer, retrieves the input you hit at 16.2ms, that the USB buffer retrieved at 17ms.

1000Hz USB making a difference: (need several cycles before hitting this situation)
281ms: USB polls your input. sees nothing.
281.3ms: you hit a key
282ms: USB polls your input. retrieves the key.
283.3ms: IIDX polls USB buffer, retrieves the input you hit at 281.3ms, that the USB buffer retrieved at 282ms.

now take the previous situation but with 125Hz USB polling:
280ms: USB polls your input. sees nothing.
281.3ms: you hit a key
283.3ms: IIDX polls USB buffer. sees nothing.
288ms: USB polls your input. retrieves the key.
300ms: IIDX polls USB buffer, retrieves the input you hit at 281.3ms, that the USB buffer retrieved at 288ms.

so you hit the key at 281.3ms but IIDX doesn’t even see it until 300ms. if the polling rate was higher, it would’ve seen the input on its previous 60Hz poll, at 283.3ms.

when you have tons of these inputs over a long period of time, this situation can happen very often, which means polling rate can really make a difference. of course, again, whether it affects the game is a different story (timing windows of the game etc.)
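The timelines above reduce to a tiny model: the first USB poll at or after the keypress writes the buffer, and the first game poll at or after that write is when the game sees the input. Here is a sketch of that model (idealized; it ignores transfer and scheduling latency), reproducing the numbers from the examples:

```python
import math

# Idealized model of the two polling loops: the first USB poll at or after
# the keypress stores the input; the first game poll at or after that store
# is when the game finally sees it. Ignores transfer/scheduling latency.
def seen_at(key_ms, usb_period_ms, game_period_ms):
    stored = math.ceil(key_ms / usb_period_ms) * usb_period_ms
    polls = math.ceil(stored / game_period_ms)
    return polls * game_period_ms

GAME = 1000 / 60  # the game polls the buffer at 60Hz (~16.67ms)

print(round(seen_at(281.3, 8, GAME), 1))  # 125Hz USB  -> 300.0
print(round(seen_at(281.3, 1, GAME), 1))  # 1000Hz USB -> 283.3
```

Sweeping key_ms over a long stream of inputs shows how often the 125Hz case slips past a game poll that the 1000Hz case would have caught.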

disclaimer: i’m a programmer, and i’m less knowledgeable about music games than computers. I’ve only increased USB polling rate for my mouse for competitive CS and SC. there are a billion other factors to consider, including the kernel’s I/O scheduler, which can shift the delay.

Group Messaging: Huh?

February 24, 2012

Shit, I took 2-3 months to write this blog post. Since then I’ve done a handful of techy things worth blogging about.

Recently, I’ve been surrounded by a swarm of messaging confusion, especially with the release of iOS 5. I’ve been accustomed to traditional SMS, which costs the network carriers nothing and has the limitations you’d expect of a 20-year-old technology. Sprint doesn’t offer tiered texting; it’s free (and required) with a data plan, so there’s no need for text-over-data workarounds.

iOS5 introduced a new concept called iMessage. Apple knows whether or not the user on the other end is using an iOS device. If he/she is using an iOS device, the input field will show “iMessage” in a light shade. If not, it will show “Text Message” (or possibly other things I have not seen, as I cannot test this myself)

If the message is sent over iMessage, it is sent over data through Apple servers, bypassing the carrier. Most people pay for enough data to field this; these messages don’t take up much bandwidth anyways. However, it is arguable that one with an unlimited SMS/MMS plan may prefer to send messages through that channel instead of iMessage over a limited data plan. Generally this is not the case, because iMessage should not use enough data for you to even care.

I hate SMS. It’s limited to 160 characters (only 70 if you need Unicode), takes up to 30 seconds for end-to-end transmission when working properly, and the carriers charge you for it when it costs them nothing (it’s carried inside a field of a packet your phone sends every so often anyhow.)
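Strictly speaking, SMS can carry Unicode (as UCS-2), but a single message then drops from 160 to 70 characters. A simplified illustration of that arithmetic; the alphabet here is a small sample, not the full GSM 03.38 set:

```python
# Simplified: the real GSM 03.38 basic alphabet is larger and has escape
# sequences, but the 160-vs-70 split is the part that matters here.
GSM7_SAMPLE = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789 @.,:;!?()&\"'#%-=+<>/\n"
)

def single_sms_limit(text):
    # One 7-bit segment fits 160 chars; any character outside the 7-bit
    # alphabet forces UCS-2, cutting a segment to 70 chars.
    return 160 if all(c in GSM7_SAMPLE for c in text) else 70

print(single_sms_limit("see you at 8"))  # -> 160
```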

I also hate MMS. It requires you to be on 3G, so if your smartphone is connected to WiFi, it usually falls back to 3G to receive the MMS, but I have seen cases where it doesn’t. And if you are in an area with good WiFi and no 3G, you’re SOL. I also use Google Voice full-time, which does not work with MMS (and does not return an error message if someone tries to send my GV number an MMS…)

I am a fan of third-party messaging apps like WhatsApp and KakaoTalk, for various reasons. These apps popularized the concept of device push messaging over data, long before Apple integrated it into their iOS. And that’s why iMessage is so popular.. it’s integrated into the OS. It’d be interesting if Google implemented something similar in Android, after all of the copying/stealing Apple has done.

iOS5 also introduced a concept called “Group Message”. This confused the shit out of me at first because of iMessage. At first I thought Group Message was just an iPhone/iMessage circlejerk. But I noticed that when this circlejerk included an Android phone (or, to be specific, a non-iOS5 phone) that some strange behavior happened.

The non-iOS5 phone received the messages as MMS. This made less sense to me. If an iPhone user sends a message to three other people – 2 iOS and 1 non-iOS, I thought 2 of those messages would be delivered over iMessage and the third over SMS. But that was not the case.. The last one was over MMS.

This is because MMS “extends” SMS capability by a bit. It is a completely different type of message, but it supports long messages, (obviously) media, and the recipient list is sent in the message. This is how phones recognize that a group conversation is happening: threading multiple MMS messages with the same group of recipients.
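That recipient-set threading can be sketched in a few lines. The phone numbers and message bodies below are made up; the idea is just to key each message on its full participant set:

```python
# Made-up numbers/messages; the point is keying each MMS on its full
# participant set (sender + recipients, minus yourself) to build threads.
from collections import defaultdict

def thread_key(sender, recipients, me):
    # Everyone involved except yourself, order-insensitive.
    return frozenset({sender, *recipients} - {me})

me = "+15550000"
messages = [
    ("+15551111", ["+15550000", "+15552222"], "pizza tonight?"),
    ("+15552222", ["+15550000", "+15551111"], "I'm in"),
    ("+15553333", ["+15550000"], "unrelated 1:1 text"),
]

threads = defaultdict(list)
for sender, recipients, body in messages:
    threads[thread_key(sender, recipients, me)].append(body)

# The first two messages share a participant set, so they form one group
# thread; the third is an ordinary one-on-one conversation.
```

Using a frozenset makes the key order-insensitive, so a reply from any member of the group lands in the same thread regardless of who sent it.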

Now we have another problem. A phone can read that a MMS was sent to multiple recipients, but

  1. it usually doesn’t do anything to show the user that it happened.
  2. when a reply is composed, it may address it only to the original sender, via SMS, which breaks the chain.

The recipient data is in the MMS message, but dumbphones and many smartphones seem to ignore it. The native Messaging app for Android (as of 2.3, Gingerbread) ignores it. Handcent for Android recognizes it, but sends replies as SMS unless you attach a dummy image which would make it MMS/work “properly”.

My coworker looked into this issue for a while as he uses an Android phone and communicates with some iPhone users. For Verizon, there is an app called “Verizon Messages” which behaves properly with “Group MMS” threads when it sees them. I have yet to find solutions on other platforms, but I haven’t really searched.

Nitpicks: TweetCaster Pro for Android

November 2, 2011

I used the official Twitter for Android when I got my first Android device in February. A few months later, Amazon released TweetCaster Pro for free as its Free App Of The Day promotion.

I tried it, and liked having a “Jump to top” button (that is SOOO useful), and image previews. I use it very often, sometimes way more than others. Doing so, I found a few things that I’d like to see fixed.

Previews work.. but only in Timeline mode, or on someone’s profile page. If I stumble upon the same tweet through Mentions or Thread, nothing previews. Application handlers fail in a similar fashion. I use TweetCaster Pro to read TwitLonger links, as well as images from twitpic/yfrog/etc. It really cuts down on the time needed to reach the desired content, by avoiding launching the browser and loading the bloat.

Similarly, a YouTube link will properly launch into the Android YouTube app, but not if I click the link through Mentions or Thread.

I’m not reproducing this behavior at the moment; this is all coming from my memory, so I might be slightly off, but the issue is definitely there.

There has been a rising frequency, over the past month or two, of connection errors while loading tweets or threads. I don’t know if this is a TweetCaster problem or Twitter API issue, but it has happened to me on various Wi-Fi, 3G, and 4G networks.

I’m a location whore, so I enjoy adding my current coordinates to my tweets (as long as they don’t expose private information, generally the location of my home.) However, when I enable it in TweetCaster, it doesn’t tell me whether it successfully obtained a position, so I have no idea if the location got added, or how long to wait. An even nicer feature would be a preview of the detected location, to make sure I didn’t end up with something absurd from Wi-Fi geolocation.

I miss being able to open one tweet. Sometimes I want to show a tweet to a friend, but I have to show them a timeline of tweets and say “look at this one” or cover the others with my hand.

Someday there will be an API to see replies to the currently viewed tweet. AFAIK only the web Twitter client supports this.

Overall, I love TweetCaster Pro: it has tons of features, support for a bunch of things I don’t use, and options to customize everything just the way you want it.

Edit: I like how a release came a few days after, which allowed seeing replies to a tweet.

Connecting an HDMI laptop/computer to a DVI monitor

September 17, 2011

Most of my “HDMI-DVI” adapters are female HDMI to male DVI. They’re intended to convert a DVI port on a PC to a HDMI port, allowing you to connect your HDMI monitor to “DVI” on the PC. However, what I wanted to do was a little bit backwards.

I wanted to connect an HDMI laptop/computer (a common output nowadays) to a monitor without HDMI, but with DVI. I knew that with a standard $1 male-male HDMI cable, I could connect the computer to the adapter. However, the other end of the adapter has pins for DVI-I dual link, while the monitor’s female end is DVI-D dual link.

I looked at the pinout on Wikipedia and deemed the side pins unnecessary (“don’t care”), as they were for analog signals. I pulled the 4 square pins (C1-C4), but the plug still wouldn’t fit. I realized that the flat connector (C5) was a bit longer on the converter than on the DVI cable I was using; it was analog as well, so I pulled it out too. And then it worked! I was able to hack my $2 HDMI-DVI adapter to do something it wasn’t intended for, given a few minutes of research and understanding. Of course, one could buy the correct converter from Monoprice, but shipping costs tend to be too much for one item, and sometimes you need a hack in the moment instead of waiting a few days for shipping.

A year ago, I learned the difference between DVI-D and DVI-I on video cards, and how the DVI connector supports extra pins carrying an analog signal, which can drive VGA monitors through the cheap, passive DVI-VGA converter bundled with every video card nowadays. Now I understand it just a bit more.

Why I can’t use WebOS

September 8, 2011

I’ve used WebOS for 2 years, since obtaining my Palm Pre. In general, there is not much available on WebOS compared to Android, and development to fix that is very slow.

I moved to an Android phone 7 months ago after being fed up with the lack of Google Voice integration, and Sprint ironically released their integration with Google Voice a month later. I also recently obtained a TouchPad since it was $100, even though I did not need a tablet whatsoever (I’m a phone + netbook + desktop kinda guy; no tablets or laptops.)

My list was scrambled not too long ago, so it’s probably missing things, but without further ado..

TouchPad severe:

  • Maps – Bing…
  • No camera app WTF!
  • No GPS (TouchPad only), so no navigation

WebOS severe:

  • No real input method support
  • No widgets
  • No polished Twitter app; a few have existed here and there but no outstanding ones
  • No application handlers!
  • Email – replies are made with a weird font, makes me not want to use it altogether
  • The obvious – limited app selection. I’m sick of no Google Voice, no Google Reader, no VNC client, weak terminal/ssh clients, etc.

TouchPad mehs:

  • Kinda slow at stock 1.2GHz, fine with 1.5 UberKernel
  • YouTube is not native. They want you to just use the YouTube webpage, which isn’t exactly bad, but isn’t good at all. You try to do desktop-style mouseover to fullscreen or change volume, and it doesn’t work well on the TouchPad. Also, videos have trouble playing at higher resolutions.

WebOS mehs:

  • No scrobbler. Must use music player with integrated scrobbling

WebOS awesome:

  • Card view
  • Best developer community yet worst app selection
  • Homebrew FTW (also FTL, we live in an era where everything should just work)

Since it’s 2011, everyone expects their technology to just work, and for some reason they all think that you should never have to wait for anything. While this is a somewhat reasonable expectation, they shouldn’t be surprised when it doesn’t perform “like a Mac” or “like an iPad”… that is, a device that can’t really do anything, but does it (fairly) well.

Thunderbird 5 GUI slowdown

August 12, 2011

I recently upgraded Thunderbird from 3 to 5. The graphical interface has a bit of a redesign, but it is horribly slow and unusable. Sad, because speed is the reason I chose Thunderbird.

After googling for a bit, I found people messing with gfx settings, so I decided to try that myself. A few combinations didn’t work, but I eventually found one that did.

So here are my (non-default) option settings:
I bet one of those isn’t necessary.

It’s fairly snappy again!

Multiboot FreeBSD

July 12, 2011

I have a FAT32 partition with my GRUB, stages, menu.lst, etc. and I have two installs of FreeBSD on their own primary partitions afterwards. Ideally, I would just chainload each partition, and the first boot sector on the partition would handle booting the operating system on that partition. But FreeBSD (and Solaris) are a bit different.

FreeBSD makes it simple enough. All you need to do is set root to the first subpartition of the FreeBSD partition and load /boot/loader with the kernel command.

However, GRUB needs to be able to read UFS2 in order to load /boot/loader from that UFS2 filesystem; otherwise, partition type 0xa5 will show up as “unknown”. I updated the GRUB on the FAT32 partition to support UFS2 in stage1.5, among some other unusual filesystem types.

I then reinstalled the FAT32 copy of GRUB to the MBR with the usual install (hd0,0)/boot/grub/stage1 (hd0) (hd0,0)/boot/grub/stage2 p (hd0,0)/boot/grub/menu.lst, and voila, the MBR/booted GRUB could read UFS2.

Here’s a sample of my finished menu.lst:
[root@saratoga16a ~]# cat /mnt/fat32/boot/grub/menu.lst
default 0
timeout 8
title FreeBSD 7.4 i386
    root (hd0,1a)
    kernel /boot/loader
title FreeBSD 7.4 amd64
    root (hd0,2a)
    kernel /boot/loader
title Windows
    root (hd0,0)
    chainloader +1
[root@saratoga16a ~]#