Raspbian: Configuring for editing Python with vim

Mar 31, 2016
 

We have a new Raspberry Pi 3 in the house and I’ve been having some fun getting it set up to edit Python with a customized vim. You can see my dotfiles here.

So in a nutshell, I like using tmux + vim + plugins and customizations. The following are some snags I ran into and how to get them sorted.

To get tmux and vim with Python enabled for plugins:

sudo apt-get install tmux vim-nox

I use syncthing to keep my dotfiles in sync on the different systems I develop on.

cd ~/sync/common/config/dotfiles
./updot.sh
cd ~
vim +PluginInstall

Before trying to build YouCompleteMe you will need to increase your swapfile size, otherwise you’ll die a strange and convoluted death.

sudo vim /etc/dphys-swapfile  
#edit this line
#CONF_SWAPFILE=100
CONF_SWAPFILE=1000  

Then reboot. Switch to

.vim/bundle/YouCompleteMe

run

./install.py

I returned CONF_SWAPFILE to its original value afterwards.

To round out the Python tooling, install virtualenv:

sudo apt-get install python-virtualenv

To get the requests package to run without the InsecurePlatformWarning, you’ll need to install some supporting libs:

sudo apt-get install libffi-dev libssl-dev python2.7-dev

Finally, the default /etc/vim/vimrc has syntax highlighting commented out. So either uncomment it, or as in my case, update your .vimrc to enable highlighting. This is the first platform I have had to do this on; it won’t hurt my other installs, so I kept it in my own dotfiles.
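If you go the .vimrc route, the line in question is the stock vim option, nothing Raspbian-specific:

syntax on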


Birthdays, Ramanujan, but in the end it’s just Python.

Feb 10, 2016
 

It’s Sunday morning and I get some time to myself, so I’m listening to some blues and catching up on my newsfeeds when I come across this interesting article on calculating what size group of people would be necessary to have a 50/50 chance of two of them sharing the same birthday.

General birthday problem

The primary focus of the article is this equation for calculating the probability of uniqueness given a sample of size r drawn from a group of N things to choose from:

$$p = \frac {N!} {N^r (N-r)!}$$

In the article, he is concerned with the possibility of overflow given the size of the factorials involved, and since scipy doesn’t have a log factorial, he implemented his solution with the log-gamma function (see the article above for his code). So I say to myself, “Wonder if I can do this with just straight Python?”

A quick google on log factorial found this approximation of log factorial by Srinivasa Ramanujan on math.stackexchange. If you have not heard of Ramanujan before, stop and google him immediately. Wow!

$$\log n! \approx n\log n-n+\frac{\log(n(1+4n(1+2n)))}{6}+\frac{\log(\pi)}{2}$$

Which works out to the following CPython code:

from math import log, pi, exp

def logn_factorial(n):
    """return an approximation of log n! using Ramanujan's equation."""
    return n * log(n) - n + (log(n * (1 + 4*n * (1 + 2*n)))/6) + (log(pi)/2)

# example values (mine, not the article's): N = 365 days, r = 23 people
N, r = 365, 23
p = exp(logn_factorial(N) - logn_factorial(N - r) - r*log(N))

Ok, so no scipy is needed to follow along with this article, plus I get to use some very cool math. I like Sunday morning fun time. However, I then start thinking, “overflow”, hmmmm. What is the upper bound of math.factorial anyway? Since it’s the birthday problem, let’s see what happens with 365:

>>> import math
>>> math.factorial(365)
25104128675558732292929443748812027705165520269876079766872595193901106138220937419666018009000254169376172314360982328660708071123369979853445367910653872383599704355532740937678091491429440864316046925074510134847025546014098005907965541041195496105311886173373435145517193282760847755882291690213539123479186274701519396808504940722607033001246328398800550487427999876690416973437861078185344667966871511049653888130136836199010529180056125844549488648617682915826347564148990984138067809999604687488146734837340699359838791124995957584538873616661533093253551256845056046388738129702951381151861413688922986510005440943943014699244112555755279140760492764253740250410391056421979003289600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

Ok, let’s dial it up by a factor of 1000.

>>> math.factorial(365000)

Well, it took a bit, but it ran, with over 40 screens of numbers. Python is still going strong, so what is the upper bound of math.factorial? A search brought me here http://bugs.python.org/issue8692 and specifically this message, which means that the result can not exceed sys.maxsize - 1 digits, or on a 64-bit platform, 2**63 - 1 digits of capability. Thanks to some dedicated individuals, who seemed to be having as much fun as I was, math.factorial is up to the task.

The take-away from this article is not the cool math, or the approximations of $\log n!$; it is…

Don’t underestimate the power of Python.

Try straight Python before you move on to something more complex. The approximations of $\log n!$ were unnecessary. All that was needed was just Python, and we get the following implementation of the probability of uniqueness:

from math import factorial

# N, r as before; run under Python 3, where / is true division
p = factorial(N) / ((N**r) * factorial(N-r))

A simple and straightforward implementation of the equation.
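To close the loop on the original question, here is a small sketch (Python 3) that walks r upward until uniqueness drops below 50/50; the answer of 23 is the classic birthday-problem result:

from math import factorial

def p_unique(N, r):
    """probability that r samples drawn from N values are all distinct"""
    return factorial(N) / ((N ** r) * factorial(N - r))

r = 1
while p_unique(365, r) > 0.5:   # stop at the first group size more likely
    r += 1                      # than not to contain a shared birthday
print(r)                        # 23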

* A quick note: I’m usually a stickler for good variable names; however, when working math equations I stick as close as possible to the equation that I am implementing.

* Also, if you are working with big enough numbers, and I mean huge numbers, then approximations might be needed, but by then you are going to be working at the limits of what a 64-bit platform can do.

Premature optimizations and all that. When was the last time you fell into the trap of implementing something to handle a misconceived belief of a shortcoming in Python or one of the standard libs?

Feb 3, 2016
 

Since Python is call-by-object (*1), a function that mutates a mutable argument changes that object in the caller’s scope. Some code to illustrate:

>>> mobject = {'magic': True}
>>> id(mobject)
140330485577440
>>> 
>>> def toggle(mdata):
...    '''flip the state of magic'''
...    mdata['magic'] = not mdata['magic']
... 
>>> 
>>> toggle(mobject)
>>> mobject
{'magic': False}
>>> id(mobject)
140330485577440

So hopefully this does not surprise you. If it does, please see the two links in the footnotes (*1), as they explain it quite well.

My question deals with the implicit nature of the mutation. Not that Python is somehow wrong; it is the fact that the function usage does not convey the mutation to the reader as pointedly as I want. Coming from other languages that are call-by-value, a function that wanted to mutate an argument and get it back into the caller’s scope had to return the mutated value.

>>> def toggle_explicit(mdata):                                                 
...    '''flip the state of magic and return'''                                
...    mdata['magic'] = not mdata['magic']
...    return mdata
... 
>>> mobject = toggle_explicit(mobject)
>>> mobject
{'magic': True}
>>> id(mobject)
140330485577440
>>> 

Now I know that the above code is most definitely NOT call-by-value, but I do feel that it is more explicit about my intention, even though the assignment is not needed for the actual effect, i.e.:

>>> toggle_explicit(mobject)
{'magic': False}
>>> mobject
{'magic': False}

So why does toggle_explicit() give me the warm fuzzies, whereas toggle() requires the reader to know what is going on? Is it just me shaking the cruft of call-by-value languages off? What do you do when you mutate state within a function? Is the toggle_explicit() form not/less Pythonic?
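A third option, for contrast, is to sidestep mutation entirely and return a fresh object; the name toggled() is mine, not from the discussion above:

>>> def toggled(mdata):
...    '''return a copy of mdata with the state of magic flipped'''
...    new_data = dict(mdata)    # shallow copy; the caller's dict is untouched
...    new_data['magic'] = not new_data['magic']
...    return new_data
... 
>>> mobject = toggled(mobject)   # rebinding is now required to see the change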

— Footnotes —

(*1) Raymond Hettinger, in this SO article, references a post by Fredrik Lundh on “Call By Object”.

Jan 27, 2016
 

In a recent post, Amit talked about temporary files and gave a number of scenarios where they can be quite handy. In testing, I occasionally need temporary files and prefer to use mkstemp; however, the cleanup of the file was bothersome, and I found that I often needed to write something into the files before the test.

from dhp.test import tempfile_containing

contents = 'I will not buy this record, it is scratched.'
with tempfile_containing(contents) as fname:
    do_something(fname)

Thus tempfile_containing was my solution. It uses mkstemp, writes contents to the file, and is implemented as a context manager, so it returns a path and file name that is cleaned up when it goes out of scope. (Note: NamedTemporaryFile and the others return a file-like object; tempfile_containing returns the path/filename of an existing file.) After the file is written to, it is closed, so no inadvertent lock contention occurs on some OSs. If it sounds like something that would make writing tests faster for you, check it out at:

* documentation
* Source
* pip install dhp
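For the curious, the general shape of such a context manager is easy to sketch with mkstemp; this is an illustration of the approach, not the actual dhp implementation:

import os
import tempfile
from contextlib import contextmanager

@contextmanager
def tempfile_containing(contents, suffix=''):
    '''yield the path of a closed temp file pre-loaded with contents;
    the file is deleted when the with-block exits'''
    fd, path = tempfile.mkstemp(suffix=suffix)
    try:
        os.write(fd, contents.encode('utf-8'))
        os.close(fd)    # closed before use, avoiding lock contention on some OSs
        yield path
    finally:
        os.unlink(path)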

Jan 20, 2016
 

Comments as defined by Python in 2.7.x and 3.4.x

A comment starts with a hash character (#) that is not part of a string literal, and ends at the end of the physical line. A comment signifies the end of the logical line unless the implicit line joining rules are invoked. Comments are ignored by the syntax; they are not tokens.

Seems pretty straightforward to me. The parser finds a comment token and ignores everything up to the new-line token (except for when the implicit line joining rules are invoked). So why was I struck with a hmmm, when I opened up a Python console, entered a comment, and pressed Enter?

>>> # this is a comment, guess what happens next?
...

If you look closely, the next line is prefixed with an ellipsis (…) and not a new prompt (>>>). Well there must be a reason, but this feels “unexpected.” So I pull up a 2.7 console and try it again.

>>> # this is a comment, guess what happens next?
...

So it looks to be intentional, although it doesn’t feel correct. Ok, let’s give pypy a try and see what happens there.

>>>> # this is a comment, guess what happens next?
>>>>

Ok, now I am confused. CPython treats it as an unclosed statement of some kind, although you would think it would be closed because a new-line token should have been encountered thus closing off the comment. However, when pypy gave a different and less surprising result, I put a hand to my chin and said, “Hmmm.”

Ok interweb, does anyone know what is going on here?

1) Why the unexpected ellipsis in CPython?
2) Why does pypy not return an ellipsis?
3) Are they both correct, and just slightly different implementations of the same reference? Or is one more correct than the other?

While I’m scratching my head, you can ponder this:

>>> # comment
... 2 + 2
4
>>> 
Jan 13, 2016
 

A search for information on string interpolation in Python will inevitably lead you to comments and links to old documentation saying that the string modulo operator is going to be deprecated and removed. However, that is just outright FUD. I need not make a case for the modulo operator; I’ll just let the code do the talking.

from timeit import timeit

def test_modulo():
    'Don\'t %s, I\'m the %s.' % ('worry', 'Doctor')

def test_format_explicit():
    'Don\'t {0}, I\'m the {1}.'.format('worry', 'Doctor')

def test_format_implicit():
    'Don\'t {}, I\'m the {}.'.format('worry', 'Doctor')

timeit(stmt=test_modulo, number=1000000)
timeit(stmt=test_format_explicit, number=1000000)
timeit(stmt=test_format_implicit, number=1000000)

Running the code on python 3.4.3 I get the following results:

>>> timeit(stmt=test_modulo, number=1000000)
0.668234416982159
>>> timeit(stmt=test_format_explicit, number=1000000)
0.9450872899033129
>>> timeit(stmt=test_format_implicit, number=1000000)
0.8761067320592701

Note that test_format_explicit is the form most commonly found on the web. However, the implicit version is a much closer equivalent to test_modulo. In this case, there is an apparent price for being explicit.

Until .format is on par, speed-wise, with %, there is no chance of it being deprecated. I support .format‘s existence; in some battlegrounds it is superior. You shouldn’t bring regexes to the fight when .startswith, .find, .endswith or in can handle the challenge cleanly. The same is true for .format and %.

And as PEP 461 demonstrates, the string modulo operator is not going quietly into the night.
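For reference, PEP 461 brings %-interpolation to bytes in Python 3.5, which is especially handy for wire protocols; a quick taste:

# Python 3.5+, per PEP 461: %-interpolation works on bytes too
request = b'GET %s HTTP/1.1\r\n' % b'/index.html'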

This post was inspired by curiosity after reading this 2013 article.

Jan 6, 2016
 

I personally don’t like Python’s type annotations; they completely mask out what was a human-friendly function signature. For something that was being proposed and skunk-worked into stub files, according to Guido’s keynote at PyCon, it is spreading inside source files, at an alarming rate, throughout Python’s upper echelon.

Just this weekend, Type Annotations infested a blog post on why print is now a function in Python 3. Luckily Brett took out the hedge trimmers and cleared away the mess. It is an interesting blog post now that you can see what he is talking about. Now I see that they’ve spread to PEP8.

The usefulness of Static Type Checking against dynamic code is questionable at best. The damage that can and may be done to readable function signatures is frightening, on a scale similar to the kudzu infestation and decimation of indigenous plants in the US. I encourage you to help fight this invasive and damaging trend by keeping your Type Annotations where they belong, in stub files.
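To make the contrast concrete, here is a hypothetical function with its annotations parked in a .pyi stub file instead of inline:

# in the source file, the human-friendly signature stays as written:
def fetch(url, timeout=10.0, retries=3):
    ...

# fetch.pyi, the stub file, carries the annotations for the type checker:
def fetch(url: str, timeout: float = ..., retries: int = ...) -> bytes: ...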

Why doesn’t PEP8 encourage the use of stub files over obfuscating your code? A considerable number of people don’t like lint droppings ( # pylint:disable=… ) in their code, and those at least go in a comment; this trash is being put right in the function’s signature.

What is the practical upside to Static Type Checking? Guido talked about it in big terms and hand waving in his keynote, and people use glowing buzz words, but, seriously, with examples, what can it actually do that writing testable code and tests can’t? The upside is as hard to see as a type-annotated signature.


Scribbler 2 Robot + Fluke + Myro

Nov 19, 2015
 

Last night I gave a presentation on Robots and Python at the Omaha Python User’s Group meeting.

I’ve decided to lend out my robot to other group members who are interested in the topic, so I am going to document how to get the robot set up and create an environment to interact with it. It had been a while since I last used the robot (python2.4 or so), and I had to do a few things to get it fixed up with the current version of Python and the supporting packages. I got a lot of help from this article, but I am going to condense that information to what needs to be done on a Linux platform.

NOTE: These instructions are for python2.7; I’ve read that python3.x is problematic, although I’ve not tried it.

Software Environment

  1. First, set up a virtual environment:
     virtualenv robot

  2. Change to the directory and activate the virtual environment:
     cd robot; source bin/activate

  3. Install the dependencies:
     pip install numpy pyserial Pillow

  4. Check out the latest myro source:
     svn co http://svn.cs.brynmawr.edu/Myro/trunk myro

  5. Change into the myro/myro subdirectory and edit the graphics.py file. You will need to change the line import ImageTk to from PIL import ImageTk.

  6. Change back up one directory and run the setup for the myro library:
     python setup.py install

  7. Go back one more directory so you are in robot.

Bluetooth setup

My laptop didn’t have Bluetooth built in, so I used a dongle. Do what you need to do and open up your Bluetooth manager, then turn on the robot (with the Fluke board attached). The robot requires 6 AA batteries and will run fine with rechargeables if you have them; install them in the bottom compartment. Install the Fluke board by mating it to the RS-232 connector on the top of the S2. The power switch is a black slider by the comm port.

Look in your Bluetooth manager for a device that has IPRE in the string. Pair with it using the code 1234. Make a note of the device that is set up; on my rig it was /dev/rfcomm0.
NOTE: on mine, /dev/rfcomm0 was root:dialout but I couldn’t access it. I was too lazy to check my groups, so I just pulled out the hammer and hit it with
sudo chmod 666 /dev/rfcomm0

Ready to Test

Inside your virtualenv, fire up python and enter the following:

from myro import *
initialize('/dev/rfcomm0')

Repeat the initialize command until you are successfully connected; you’ll hear the beeps once you connect. Now let’s test it, assuming we are starting where we left off above. (If not, start the python interpreter up again and issue the import and initialize commands from above.)

forward(1, 1)

See the manual for a list of commands, or just use dir() and help(interestingCommand).

If you run into any issues please let me know. I am going to keep this post updated so as others borrow the robot they’ll have some up to date instructions to get them started. If you are a member of the Omaha Python Group and would like to arrange to borrow the robot, please contact me.

I’ll post some robot code in future posts.
Have Fun!


pypi: setup.py, keeping a DRY long_description

May 17, 2014
 

I like the idea of listing changes to my distribution in the long_description in setup.py. So a release ago, I started appending docs/changes.rst to the README.rst file that I am using for pypi. It was a simple doc with bulleted lists. The world was good.

    with open('README.rst') as h_rst:
        LONG_DESCRIPTION = h_rst.read()
    
    with open('docs/changes.rst') as h_rst:
        LONG_DESCRIPTION += h_rst.read()

However, I grew unsatisfied with my changes.rst file and wanted it to link, a la cross-references, to the documentation, so that when reading the docs a user could quickly go see the docs for that item. For example:


* added ``preserve_mtime`` parameter to :meth:`.put`, optionally updates the remote file’s st_mtime to match the local file.

Sphinx liked it, I liked it.
However, python setup.py check --restructuredtext --strict gagged when it saw it, and so would PyPI.

Harumph! I mutter to myself. I want it all. I want Don’t Repeat Yourself (DRY), I want my change log to display on PyPI, I want cross-references in my docs. However, cross-references don’t make sense in the long_description; what they link to isn’t there. I do not want to update changes in two different places; I am already vexed with making sure that just the one document is updated. After all, who likes writing docs more than writing code?

Well, how do I get the cross-references scrubbed out for the long_description? Here is what I coded up:

with open('README.rst') as h_rst:
    LONG_DESCRIPTION = h_rst.read()

with open('docs/changes.rst') as h_rst:
    BUF = h_rst.read()
    BUF = BUF.replace('``', '$')        # protect existing code markers
    for xref in [':meth:', ':attr:', ':class:', ':func:']:
        BUF = BUF.replace(xref, '')     # remove xrefs
    BUF = BUF.replace('`', '``')        # replace refs with code markers
    BUF = BUF.replace('$', '``')        # restore existing code markers
LONG_DESCRIPTION += BUF

It is a Decorate, Scrub, Transform, Undecorate kind of pattern. Stripping out the :roles: tags left the single ` markers. So, since I wanted to change those to ``, I had to hide all of the existing ``, scrub, transform, and then unhide the original ``. And so that is what I did.

I imagine a determined individual could create a regex to cover the entire breadth of sphinx directives and make a sphinx_to_pypi converter. But for me, my itch is scratched. Maybe it will help someone else too.

Huzzah!
Here is the resulting long_description on pypi and the changes doc on pysftp.rtfd.org

Who else is wrangling long_description from RestructuredText documentation? What are you doing?

May 4, 2014
 

0.1.3, 0.1.4, 0.1.5 – The Lost Versions

2014-05-05: I am updating this post to give the short answer: use python setup.py register to update your metadata on PyPI. It can be run repeatedly and will modify the metadata on PyPI for the distribution.

and now the original blog post…

During the first launch of YamJam, I went through a series of releases because of rendering issues with README.rst on pypi. WAT!, I say to myself, and a few other choice words. I had just created 6 pages of rst that compiled just fine for my sphinx-generated documentation. I had cut and pasted the top portion of my index.rst with some text edits; what could be going on? I uploaded my package with twine: no errors. pypi seemed to say everything was peachy, then turned its back and mumbled FU. (Things should not fail silently.) I pulled up my shiny new package on pypi and was met with unseemly, unformatted text instead of a spiffy display.

I google for answers, I review other dists on pypi, pulling up their repos and reading through their readmes. Aha! I say to myself, I don’t see anyone else using an :alt: on their build status badge. I remove mine, go through the release procedures again, upload, and am slapped down yet again.

I grow angry and frustrated. I recheck my readme, I google for related issues. I find mention of “run the docutils command on it”, but no mention of what command. I review even more dists on pypi, seeing other broken readme renderings. I am unsure of what to do. I think, “It must be my .. code:: python directives; I see lots of plain ::.” I rip out mine and replace them with ::. I try again and FAIL.

I reread the rst and put it through sphinx just by itself, and see a warning about duplicate references. I had `view <url1>`_ for the docs and `view <url2>`_ for something else, one of those little edits I mentioned earlier. You think, looks like an href, smells like an href, but no, it is different. That did it; 3 revisions later, I’m happy with the display of my readme on pypi.
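For anyone hitting the same wall: rST treats `text <url>`_ as a named reference, so two links that share the text “view” collide. The double-underscore anonymous form avoids the clash (the URLs here are placeholders):

`view <http://example.com/docs>`__ and `view <http://example.com/repo>`__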

It was a day later, while enhancing my release and CI scripts, that I found the python setup.py check and python setup.py check --restructuredtext commands. I tested locally, and sure enough, it would “warn” but not set a return code, so my release script couldn’t detect the failure. I figure that messing up your pypi page should equal FAIL, not warn. Ok, so I’ll submit a patch to make it ERROR and set a non-zero return code. I find the code repos, start reading through the code, and discover an option I hadn’t seen: -s, strict. That will cause it to FAIL and set a non-zero return code. So off to the CI script I go. I add the strict option, and the test passes when it should have failed. If you don’t have docutils installed, and I didn’t on my CI, it just returns the same response as if it had passed. FACE PALM. pip install docutils

As it stands now, I can detect rst that will cause pypi to fail silently, so I am good in that regard, and you should be too, now that you have read this.
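So the gate that finally worked in my CI boils down to two lines; forget the first and the second is a silent no-op:

pip install docutils
python setup.py check --restructuredtext --strict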

Sidebar: why doesn’t setup.py upload and twine upload automatically run the checks supplied by setup.py in their strictest mode? Fail early; the cost is less.

Which brings me to unnecessary binding. Why is the description on pypi so tightly and unnecessarily bound to a distribution release, forcing a new release to fix render problems and typos? pypi will let us upload packages but not let us edit the description in a web interface without having to do an entire re-release. Use the readme as a starting point, and let us edit without re-releasing.

Alex G, if you happen to read this, please make this possible on warehouse. Mr. Gaynor, tear down this binding! Also, give us download stats with as much info as possible, so we can weed out mirror requests.

Anyone who is thinking, “I’m going to rewrite pip because of X”: DON’T. We have had too many installers, too many distutils and setuptools. setup.py is a cacophony of knobs, buttons and dials, many fighting each other. “There should be one– and preferably only one –obvious way to do it.” Unless it has to do with setup.py; then it should be as confusing as possible. There needs to be a pypi/setup.py BDFaW (Benevolent Dictator For a While). If the new solution doesn’t solve the current problem, or creates new problems, it is not a solution; it is just change for the sake of change. Eggs and wheels, harumph, I say.

I enjoy writing code in Python, I endure creating a release.

The world can and should be a better place.

Post PyCon momentum and drone.io

May 1, 2014
 

Since I didn’t get to attend this year, again, I’ve been watching the PyCon videos (thanks, pyvideo.org and PyCon!). Out of all the videos, Carl Meyer’s talk on “Set your code free…” struck a nerve.

I have a project, YamJam, that I have been using since 2009. The main idea is a framework that allows you to factor out sensitive data from your code before you upload to a 3rd-party repos. I’ve got internal and external projects that have been using it for quite a while, and it makes these refactorings a breeze. I have other open source projects that get a lot more attention yet have a niche audience, so I am experimenting to see if it is the lack of documentation and of being a conforming pypi dist that is limiting its appeal. YamJam should be popular, because it can scratch an itch caused by Django. That itch being: what do you do with settings.py? It’s got lots of sensitive data that shouldn’t be checked into a repos, but it also has a lot of code that should be. It also makes deployment between dev, staging and production easy to do with a checkout.
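The gist looks something like this; I am sketching from memory, so treat the exact call and key names as assumptions and check the YamJam docs for the real thing:

# settings.py stays in the repo; the secrets live in ~/.yamjam/config.yaml
from YamJam import yamjam

_YJ = yamjam()['myproject']            # 'myproject' is a hypothetical config key
SECRET_KEY = _YJ['secret_key']         # pulled from the local config, not the repo
DATABASE_PASSWORD = _YJ['db_password']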

To that end, I’ve been creating a proper and complete test suite (it was doctests) using py.test and tox, continuous integration with drone.io, documentation via readthedocs.org and sphinx, and spiffing up the setup and dist with the latest distutils and uploading with twine instead of setup.py upload.

Through this whole process, I’ve had a lot of new experiences that I am going to be blogging about in the upcoming weeks. Things I like, really like, things that are annoying and some things that are counter to my way of thinking. During this process I’ve also been filing bug reports when I’ve encountered them and sending out feedback for improvements along the way.

After moving my code from subversion on google code to mercurial on bitbucket (hg convert), I started looking for a CI service to use. Off the bat, I looked at Travis-ci, but unfortunately, travis is a github snob. If you are not hosting your code on github, or mirroring off of github, then travis-ci is not an option. Some google searching showed that pylint (a tool I like very much and use) moved from internal tools to bitbucket and http://drone.io/. So off to drone I go.

http://drone.io/ took about 10 minutes from signing in via my bitbucket account to running my first integration. When I learned that drone.io allows you to view the build-environment settings of other open source projects, I had Python 2.7, 3.2, 3.3 and 3.4 tests running via tox less than an hour later. What higher praise could I give a service than saying from 0 to testing in 10 minutes? I really, really like drone.io and recommend that you check them out. Unlike Travis-ci, drone.io supports bitbucket, github and google code. Options, I like. It is the same reason I prefer bitbucket to github: bitbucket supports mercurial and git. I use both dvcs systems, and I prefer mercurial. I like that bitbucket allows me to make the choice. Choice makes me happy. Check out my setup on drone.

More to follow.

A date with JSON?

Dec 1, 2011
 

I’ve run into this situation a few times and end up having to query my gBrain for the answer. When using json as a transport from python to html/javascript, I frequently need to move date and time data. However, the builtin json module is not happy when you ask it to serialize a date/time object.

>>> json.dumps(datetime.datetime.now())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/json/__init__.py", line 230, in dumps
    return _default_encoder.encode(obj)
  File "/usr/lib/python2.6/json/encoder.py", line 367, in encode
    chunks = list(self.iterencode(o))
  File "/usr/lib/python2.6/json/encoder.py", line 317, in _iterencode
    for chunk in self._iterencode_default(o, markers):
  File "/usr/lib/python2.6/json/encoder.py", line 323, in _iterencode_default
    newobj = self.default(o)
  File "/usr/lib/python2.6/json/encoder.py", line 344, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: datetime.datetime(2011, 12, 1, 0, 50, 53, 152215) is not JSON serializable

So this means we need to figure out a workaround. The trick is to let the json module know what it should do with a date/time object while leaving the rest of it in place. So, no replacing the default handler.

What we need to do is subclass JSONEncoder and override the default method:

import json

class JSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, 'isoformat'):  # handles both date and datetime objects
            return obj.isoformat()
        else:
            return json.JSONEncoder.default(self, obj)

Using hasattr to look for an isoformat method allows this to handle both date objects and datetime objects. So all that is left to demonstrate is how to put it together with the json.dumps method.

>>> json.dumps(datetime.datetime.now(), cls=JSONEncoder)
'"2011-12-01T00:58:34.479929"'

Ok, so now you have this ISO-formatted string containing date information; how do you get it converted to a javascript Date object once it has been transmitted across the abyss and resides in a javascript callback function?

var d = new Date("2011-12-01T00:58:34.479929");

Happy Data Slinging!


Getting more out of android.py

Nov 29, 2011
 

So, I’m hacking on a Python for Android project, which is built on top of the SL4A project. I’m currently using the remote method of development, where you fire up an interpreter on the device and share it in public mode. Then you import android.py and instantiate an instance of Android with the IP and port information of your public server. You can then hack in your favorite editor on a laptop instead of using a thumb-board. It looks like this:

import android  # The SL4A android.py module should be on your sys.path.

ip = '192.168.x.xxxx'
port = 35766
droid = android.Android((ip, port))

What happens is that the __getattr__ method of the Android object uses magic to change droid.getenvironment() into an RPC call to the public server and then return the result back as a named tuple. Nice. Being the nosey bugger that I tend to be, I modified the code to add a debug param to the __init__ method that, when set, prints out what is being sent over RPC and then the raw tuple result. A snippet of the modification goes like this:

  def __getattr__(self, name):
    def rpc_call(*args):
      if self._debug:
          print "droid.%s%s" % (name, str(args))
      res = self._rpc(name, *args)
      if self._debug:
          print "\t%s" % str(res)
      return res
    return rpc_call

You can easily see where I put my “if self._debug” logic in place. Now if I use my modified android.py I can turn on the debug flag and get some 411 on the magic that is going on. It ends up looking like this:

droid.eventWait(3000,)
	Result(id=28, result=None, error=None)
droid.eventWait(3000,)
	Result(id=29, result={u'data': u'@end', u'name': u'dmt:fromClient.speak', u'time': 1322542655069000L}, error=None)
droid.wakeLockRelease()
	Result(id=30, result=None, error=None)

SL4A, Python, webViewShow – a faster dev mode

Nov 26, 2011
 

While I was playing around with Python for Android, I was using the webViewShow method to load an interactive html page and set up message passing both from

html/js -> python

and from

python -> html/js

Part of this requires that I knock out and hack some html/js code. However, I am using the remote method with a public server on my android device, since I am too lazy to set up eclipse and a full-blown android dev env. The example code they show uses an html file located on the sdcard of the device. Of course this brings its own problems, since now I have to mount, edit, and unmount between each hack cycle. Ick. Well, I could just tell it to load the html from an off-device server, but being lazy (I think I mentioned that already) I didn’t want to rsync back and forth to my remote server, set up directories, etc. Also, I didn’t want to set up a Django install just to serve a hacky html script.

So I think to myself, man, there has to be some light-weight way to serve this up locally while I’m hacking. So I think, hey, CherryPy, but then I remember Edna, and it hits me: I can serve static pages out of a directory with just python. A little google-fu and this page appears, giving just the needed incantation.

 python -m SimpleHTTPServer 8080

This happily serves everything in the directories below it on the specified port (8080 in this case). I make a little adjustment to my webViewShow call, changing it from file:/// to http://my-dev-ip:8080/thefile.html, and all is good with the world. As I hack, changes to the html are pulled and served immediately.
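The change itself is a one-liner; my-dev-ip and thefile.html are placeholders for your serving host and page:

droid.webViewShow('http://my-dev-ip:8080/thefile.html')  # previously a file:/// path on the sdcard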

Did I mention, Python Rocks!


Google App Engine — Auto-Increment vs. UUIDs

Apr 27, 2008
 

App Engine is a pretty dramatic thought departure for lots of programmers who are used to writing an app that runs on a single server and accesses a single database. Case in point: there has been a recurring topic of auto-increment fields on the App Engine list, with people trying to implement their own version since it is not a native datastore type.

Using an auto-increment field is not the way to go. It is viable when you only have one database, but the datastore for your app is going to be, or can be, replicated out to other machines. This means there will be times when datastore’ != datastore”; over time datastore’ will be sync’d with datastore” so that datastore’ == datastore”. This leads one to believe there will be times when an auto-increment field cannot be synchronized, or the result of the synchronization will be less than satisfactory. My belief that auto-increment fields are the wrong idea in this environment is strengthened by the fact that they are not offered as an intrinsic datatype.

The way to go, in my opinion, is to use UUIDs (see links below):
  http://docs.python.org/lib/module-uuid.html
  http://www.faqs.org/rfcs/rfc4122.html

Other Thoughts on the topic:

  • data access is very expensive; using a UUID should be faster
  • UUID1 or UUID4 would be the types to consider
  • UUID1 is preferable as it would introduce some machine significance which should make the chances for a collision to be even more remote than for a UUID4 (random)
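A minimal sketch of minting such a key with the stdlib; MyModel is a stand-in for whatever datastore model you have:

import uuid

# uuid1 folds in host and timestamp, making collisions even less likely
key_name = uuid.uuid1().hex
entity = MyModel(key_name=key_name)   # hypothetical db.Model subclass
entity.put()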

Greedy Coin Changer

Apr 26, 2008
 

Noah Gift, over on the O’Reilly OnLamp Blog, has an article on building a greedy coin changer. That is, given a value, say 71 cents, calculate the fewest coins needed to make the amount. He had listed a number of solutions, but I felt I could do it a bit more pythonically. 😉

#!/usr/bin/env python
"""implement a greedy coin changer, returning the
fewest coins to make the change requested."""
# coin_list can be expanded to include silver dollars
# and 50 cent pieces by just expanding the coin list
# to [100, 50, 25, 10, 5, 1]; the resulting answer
# structure will modify itself to reflect the change.

coin_list = [25, 10, 5, 1]
change_requested = .71
# work in whole cents; rounding avoids float drift (.71 * 100 == 70.999...)
remaining = int(round(change_requested * 100))
change_returned = []    # result structure

for coin in coin_list:
    num_coins, remaining = divmod(remaining, coin)
    change_returned.append(int(num_coins))

print change_returned
print remaining

The benefits of this version are that no conditional logic is needed, and the coin structure can be modified with the answer adjusting itself accordingly.