These are my notes on using some MicroPython-specific tools in relation to an ESP32-DevKitC board.

These notes are for v2.1 of esptool, an ESP8266 and ESP32 serial bootloader utility.

esptool has a number of functions, but I will only speak to those features required to identify the chip, get flash information and load the MicroPython firmware. See the docs for more information.

Installing esptool

(myproject)$ pip install esptool

Confirm the install:

(myproject)$ esptool.py version


Display chip information (chip_id)

(myproject)$ esptool.py -p /dev/ttyUSB0 chip_id
esptool.py v2.1
Connecting......
Detecting chip type... ESP32
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
Chip ID: 0x7240ac40964
Hard resetting...


Display Flash memory information (flash_id)

(myproject)$ esptool.py -p /dev/ttyUSB0 flash_id
esptool.py v2.1
Connecting....
Detecting chip type... ESP32
Chip is ESP32D0WDQ6 (revision 1)
Running stub...
Stub running...
Manufacturer: c8
Device: 4016
Detected flash size: 4MB
Hard resetting...


(myproject)$ esptool.py -p /dev/ttyUSB0 read_mac
esptool.py v2.1
Connecting....
Detecting chip type... ESP32
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
MAC: 24:0a:c4:09:64:c8
Hard resetting...


Loading MicroPython Firmware

You will need MicroPython firmware: http://micropython.org/download#esp32. I download to a directory named images in my project folder. Since the ESP32 code is under development, I check out the GitHub commit page for the chip for any interesting new bits.

When loading to a board that does not already have MicroPython loaded, you should erase the entire flash before flashing the MicroPython firmware.

(myproject)$ esptool.py -p /dev/ttyUSB0 erase_flash
esptool.py v2.1
Connecting....
Detecting chip type... ESP32
Chip is ESP32D0WDQ6 (revision 1)
Running stub...
Stub running...
Erasing flash (this may take a while)...
Chip erase completed successfully in 5.0s
Hard resetting...


Now load the firmware with the write_flash command.
The general form is:


esptool.py write_flash -p <port> -z <address> <filename>

-p <port>      specify the serial port to use, i.e. /dev/ttyUSB0
-z             compress data in transfer (default unless --no-stub is specified)
<address>      the flash offset to write to, i.e. 0x1000
<filename>     the firmware image file to write

(myproject)$ esptool.py -p /dev/ttyUSB0 write_flash -z 0x1000 images/esp32-20170916-v1.9.2-272-g0d183d7f.bin
esptool.py v2.1
Connecting....
Detecting chip type... ESP32
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Compressed 902704 bytes to 566927...
Wrote 902704 bytes (566927 compressed) at 0x00001000 in 50.0 seconds (effective 144.4 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting...


Verify the firmware loaded correctly

(myproject)$ miniterm.py --raw /dev/ttyUSB0 115200
--- Miniterm on /dev/ttyUSB0  115200,8,N,1 ---
--- Quit: Ctrl+] | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---

>>>


Now do a hard reset using the reset button on the board

>>> ets Jun  8 2016 00:22:57

rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
ets_main.c 371
ets Jun  8 2016 00:22:57

rst:0x10 (RTCWDT_RTC_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:2
entry 0x4007a56c
I (982) cpu_start: Pro cpu up.
I (983) cpu_start: Single core mode
I (984) heap_init: Initializing. RAM available for dynamic allocation:
I (994) heap_init: At 3FFAE2A0 len 00001D60 (7 KiB): DRAM
I (1013) heap_init: At 3FFD4158 len 0000BEA8 (47 KiB): DRAM
I (1032) heap_init: At 3FFE0440 len 00003BC0 (14 KiB): D/IRAM
I (1052) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (1072) heap_init: At 4008F3A8 len 00010C58 (67 KiB): IRAM
I (1091) cpu_start: Pro cpu start user code
I (1152) cpu_start: Starting scheduler on PRO CPU.
OSError: [Errno 2] ENOENT
MicroPython v1.9.2-272-g0d183d7f on 2017-09-16; ESP32 module with ESP32
>>>



You should verify that the firmware version reported in the banner after the reset matches the firmware that you just loaded. In this case, v1.9.2-272-g0d183d7f.

May the Zen of Python be with you.


There are many tutorials and YouTube videos that constantly encourage users to install tools and packages into their system-level libraries. (If you need to use sudo when you pip install foo, you are installing it as a system-level library.) Please, please, please take the time to learn the basics of virtual environments. If you are a developer/hacker/maker, save yourself lots of frustration by using virtual environments.

A virtual environment is an isolated Python environment: it contains all the executables needed to use the packages that a Python project requires. It lets you use the desired version of a specific library and isolates that library from other virtual environments and from the system- and user-level libraries. It also lets you easily define which packages are required to reproduce your work.

A short primer based on Python 3.

To create and set up a Python 3 virtualenv

# nav to where you want to create your project then ...
$ python3 -m venv your-project-name
$ cd your-project-name
$ source bin/activate
(your-project-name)$ pip install --upgrade pip


Now just use pip, skip the sudo, and avoid kludging up your system-level install.

To Deactivate a virtual environment

(your-project-name)$ deactivate
$


To activate an existing virtual environment

$ source bin/activate
(your-project-name)$


Now you can list the packages you have installed for THIS project with pip:

(your-project-name)$ pip freeze
pkg-resources==0.0.0
(your-project-name)$ pip list
pip (9.0.1)
pkg-resources (0.0.0)
setuptools (20.7.0)



May the Zen of Python be with you.

Every now and again, I get the bug to build something. Lately, I’ve been following MicroPython and the microcontrollers that it supports. The new hotness is the Espressif ESP32 chip. These are available from a number of different sources, many supplying a breakout board. Prices are all over the place, from $20+ down to $8+, depending on where you shop and how patient you are.

I went with the dev board from Espressif. I got a pair of them for about $15 each from Amazon. I like the trade-off of delivery time, supplier and cost. You can see and order it here: 2 PACK Espressif ESP32 ESP32-DEVKITC inc ESP-WROOM-32 soldered dils CE FCC Rev 1 Silicon

$ esptool.py -p /dev/ttyUSB0 flash_id
esptool.py v2.1
Connecting.....
Detecting chip type... ESP32
Chip is ESP32D0WDQ6 (revision 1)
Uploading stub...
Running stub...
Stub running...
Manufacturer: c8
Device: 4016
Detected flash size: 4MB
Hard resetting...


With just a bit of searching, you’ll find that you need the latest MicroPython for ESP32 and the esptool.py (pip install esptool). Then after you connect your board to your computer, you can load up the MicroPython firmware.

esptool.py --chip esp32 --port /dev/ttyUSB0 write_flash -z 0x1000 images/esp32-20170916-v1.9.2-272-g0d183d7f.bin

Now in the world of microcontrollers, blinking an LED is the “Hello World” program. However, the boards I purchased only had an LED that lit if the board was receiving power. No other LEDs on the board are connected to a GPIO pin like some other breakout boards. It does have 2 switches, one of which, Switch 1 (SW1), is connected to the GPIO0 pin. In the image, SW1 is the button on the top right, labeled BOOT. So I wrote some code to figure out the initial state of GPIO0 and then toggle the button a couple of times.

"""sw1_1.py - look at initial state of GPIO0 and then record it toggling"""
from machine import Pin


def main():
    # setup
    sw1p0 = Pin(0, Pin.IN)       # switch sw1 connected to logical Pin0
    state_changes = 0            # loop control
    prior_value = sw1p0.value()  # sw1p0 previous state, initially unknown
    print("sw1p0 initial value is %s" % prior_value)  # report initial state
    # main loop
    while state_changes < 4:     # press, release, press, release
        new_value = sw1p0.value()  # cache value, as inputs can change
        if new_value != prior_value:  # has state changed?
            print('sw1p0 was %s is now %s' % (prior_value, new_value))
            prior_value = new_value  # update prior_value for next loop
            state_changes += 1


if __name__ == '__main__':
    main()


I did sort some of this out using the serial REPL, but for this post, I wrote up a script to demonstrate my findings. Using the adafruit ampy tool, we’ll run the code.

pip install adafruit-ampy

Note: you will need to press SW1 twice before you see anything after the ampy command.

$ ampy -p /dev/ttyUSB0 run sw1_1.py
sw1p0 initial value is 1
sw1p0 was 1 is now 0
sw1p0 was 0 is now 1
sw1p0 was 1 is now 0
sw1p0 was 0 is now 1


As you can see from the results, the initial state of GPIO0 was high (or 1). When SW1 is pressed/closed it goes low (0), and it goes back high (1) when it is released/open. If you look at the board schematic, in the Switch Button section, you’ll see that when SW1 is closed, it shorts GPIO0 to ground. This would indicate that you were pulling it low from a high state. So our observations match the schematic.

If you look at the schematic, you will see a capacitor from R3 to Ground that is used to debounce the switch. You should assume that all mechanical switches bounce and that bouncing needs to be dealt with in either the circuit or code. Life is much easier if you debounce the circuit with hardware.
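If you must debounce in code instead, the usual trick can be sketched in pure Python (my illustration, not code from the board or this post): accept a new level only after it has been read several times in a row.

```python
def debounce(samples, settle=3):
    """Collapse a noisy stream of 0/1 readings into stable states.
    A new level is accepted only after `settle` consecutive readings."""
    state = samples[0]
    candidate, run = state, 0
    stable = [state]
    for sample in samples[1:]:
        if sample == candidate:
            run += 1
        else:
            candidate, run = sample, 1
        if candidate != state and run >= settle:
            state = candidate
            stable.append(state)
    return stable

# a press with contact bounce: solid 1, noisy chatter, solid 0, then release
noisy = [1, 1, 0, 1, 0, 0, 0, 1, 1, 1]
print(debounce(noisy))  # [1, 0, 1] - the chatter is filtered out
```

On the board, the same idea would sample sw1p0.value() on a timer instead of iterating over a list.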

Conclusions:

1. Success! While we don’t have an onboard LED to blink, we can do something with the board without extraneous components, a Hello World app.
2. The app is very naive since it uses polling to monitor state changes and spins in a tight loop most of the time. Often the reason for using a microprocessor has a power element to it. Sitting and spinning would be counter to a goal of low power usage.
3. We covered a lot of ground in this article, skipping or very lightly going over how to load MicroPython and the other tools I used. There are lots of very good resources for them on the interwebs.
4. If you liked this article, and you want to get an ESP32 board, you can use the Amazon affiliate link above as an expression of your support.

In an upcoming article, I’ll rework the example to be more energy conscious by using an interrupt to signal the state change.

May the Zen of Python be with you!

We have a new Raspberry Pi 3 in the house and I’ve been having some fun getting it set up to edit Python with customized vim. You can see my dotfiles here.

So in a nutshell, I like using tmux + vim + plugins and customizations. The following are some snags I ran into and how to get them sorted.

To get tmux and vim with Python enabled for plugins:

sudo apt-get install tmux vim-nox


I use syncthing to keep my dotfiles in sync on the different systems I develop on.

cd ~/sync/common/config/dotfiles
./updot.sh

cd ~
vim +PluginInstall


Before trying to build YouCompleteMe, you will need to increase your swapfile size, otherwise you’ll die a strange and convoluted death.

sudo vim /etc/dphys-swapfile
#edit this line
#CONF_SWAPFILE=100
CONF_SWAPFILE=1000


Then reboot. Switch to:

.vim/bundle/YouCompleteMe

and run:

./install.py

I returned CONF_SWAPFILE to its original value afterwards.

sudo apt-get install python-virtualenv

To get the requests package to run without the InsecurePlatformWarning, you’ll need to install some supporting libs:

sudo apt-get install libffi-dev libssl-dev python2.7-dev

Finally, the default /etc/vim/vimrc has syntax highlighting commented out. So either uncomment it, or as in my case, update your .vimrc to enable highlighting. This is the first platform I have had to do this on. It won’t hurt my other installs, so I kept it in my own dotfiles.

It’s Sunday morning and I get some time to myself, so I’m listening to some blues and catching up on my newsfeeds when I come across this interesting article on calculating what size group of people would be necessary to have a 50/50 chance of two of them sharing the same birthday.

General birthday problem

The primary focus of the article is on this equation to calculate the probability of uniqueness given a sample of size r drawn from a group of N things to choose from.

$$p = \frac {N!} {N^r (N-r)!}$$

In the article, he is concerned with the possibility of overflow given the size of the factorials involved, and since scipy doesn’t have log factorial, he implemented his solution with the log of the gamma function — see the above article for his code. So I say to myself, “Wonder if I can do this with just straight Python?”

A quick google on log factorial found this approximation of log factorial by Srinivasa Ramanujan on math.stackexchange. If you have not heard of Ramanujan before — stop and google him immediately. Wow!

$$\log n! \approx n\log n-n+\frac{\log(n(1+4n(1+2n)))}{6}+\frac{\log(\pi)}{2}$$

Which works out to the following CPython code:

from math import log, pi, exp

def logn_factorial(n):
    """return an approximation of log n! using Ramanujan's equation."""
    return n * log(n) - n + (log(n * (1 + 4*n * (1 + 2*n)))/6) + (log(pi)/2)

p = exp(logn_factorial(N) - logn_factorial(N - r) - r*log(N))
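As a sanity check (my addition, not from the article): the standard library already has math.lgamma, and lgamma(n + 1) equals log n!, so we can see how tight Ramanujan’s approximation is without any third-party packages:

```python
from math import lgamma, log, pi

def logn_factorial(n):
    """return an approximation of log n! using Ramanujan's equation."""
    return n * log(n) - n + (log(n * (1 + 4*n * (1 + 2*n)))/6) + (log(pi)/2)

# lgamma(n + 1) == log n!, computed in C with no huge integers involved
for n in (10, 365, 365000):
    print(n, abs(logn_factorial(n) - lgamma(n + 1)))
```

The differences are tiny even for small n, which is why the approximation is safe to use here.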


Ok, so no scipy is needed to follow along with this article, plus I get to use some very cool math. I like Sunday morning fun time. However, I then start thinking, “overflow”, hmmmm. What is the upper bound of math.factorial anyway? Since it’s the birthday problem, let’s see what happens with 365.

>>> import math
>>> math.factorial(365)
25104128675558732292929443748812027705165520269876079766872595193901106138220937419666018009000254169376172314360982328660708071123369979853445367910653872383599704355532740937678091491429440864316046925074510134847025546014098005907965541041195496105311886173373435145517193282760847755882291690213539123479186274701519396808504940722607033001246328398800550487427999876690416973437861078185344667966871511049653888130136836199010529180056125844549488648617682915826347564148990984138067809999604687488146734837340699359838791124995957584538873616661533093253551256845056046388738129702951381151861413688922986510005440943943014699244112555755279140760492764253740250410391056421979003289600000000000000000000000000000000000000000000000000000000000000000000000000000000000000000


Ok, let’s dial it up by a factor of 1000.

>>> math.factorial(365000)



Well, it took a bit, but it ran, with over 40 screens of numbers. Python is still going strong, so what is the upper bound of math.factorial? A search brought me here http://bugs.python.org/issue8692 and specifically this message. Which means that the result cannot exceed sys.maxsize - 1 digits, or on a 64-bit platform, 2**63 - 1 digits of capability. Thanks to some dedicated individuals who seemed to be having as much fun as I was, math.factorial is up to the task.

Don’t underestimate the power of Python.

Try straight Python before you move on to something more complex. The approximations of $\log n!$ were unnecessary. All that was needed was just Python, and we get the following implementation of the Probability of Uniqueness:

from math import factorial

p = factorial(N) / ((N**r) * factorial(N-r))


A simple and straightforward implementation of the equation.
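As a quick usage check (my numbers, using the classic birthday-problem parameters): with N = 365 days and r = 23 people, the probability that all birthdays are unique dips just below one half, which is the famous result.

```python
from math import factorial

def p_unique(N, r):
    """Probability that r samples drawn from N possibilities are all distinct."""
    return factorial(N) / ((N**r) * factorial(N - r))

print(round(p_unique(365, 23), 4))  # 0.4927
```

Python’s arbitrary-precision integers do the heavy lifting; the division of two huge ints at the end still produces a correctly rounded float.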

* A quick note, I’m usually a stickler for good variable names, however, when working the math equations I stick as close as possible to the equation that I am implementing.

* Also, if you are working on big enough numbers, and I mean huge numbers, then approximations might be needed, but by then you are going to be working at the limits of what a 64-bit platform can do.

Premature optimizations and all that, when was the last time you fell in to the trap of implementing something to handle a misconceived belief of a shortcoming in Python or one of the standard libs?

Since Python is call-by-object(*1), a function that mutates a mutable argument changes that object in the caller’s scope. Some code to illustrate:

>>> mobject = {'magic': True}
>>> id(mobject)
140330485577440
>>>
>>> def toggle(mdata):
...    '''flip the state of magic'''
...    mdata['magic'] = not mdata['magic']
...
>>>
>>> toggle(mobject)
>>> mobject
{'magic': False}
>>> id(mobject)
140330485577440


So hopefully this does not surprise you. If it does, please see the two links in the footnotes(*1) as they explain it quite well.

My question deals with the implicit nature of the mutation. It is not that Python is somehow wrong; it is that the function usage does not convey the mutation to the reader as pointedly as I want. Coming from languages that are call-by-value, a function that wanted to mutate an argument and get it back into the caller’s scope had to return the mutated value.

>>> def toggle_explicit(mdata):
...    '''flip the state of magic and return'''
...    mdata['magic'] = not mdata['magic']
...    return mdata
...
>>> mobject = toggle_explicit(mobject)
>>> mobject
{'magic': True}
>>> id(mobject)
140330485577440
>>>


Now I know that the above code is most definitely NOT call by value, but I do feel that it is more explicit about my intention, even though the assignment is not needed for the actual effect, i.e.:

>>> toggle_explicit(mobject)
{'magic': False}
>>> mobject
{'magic': False}


So why does toggle_explicit() give me the warm fuzzies, whereas toggle() requires the reader to know what is going on? Is it just me shaking the cruft of call-by-value languages off? What do you do when you mutate state within a function? Is the toggle_explicit() form not/less Pythonic?
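There is a third option worth mentioning (the sketch is mine): sidestep the question entirely by not mutating the argument at all, returning a fresh object instead, at the cost of a copy.

```python
def toggled(mdata):
    """Return a NEW dict with 'magic' flipped; the caller's dict is untouched."""
    new = dict(mdata)  # shallow copy, so the argument is never mutated
    new['magic'] = not new['magic']
    return new

mobject = {'magic': True}
fresh = toggled(mobject)
print(mobject)  # {'magic': True}  - original unchanged
print(fresh)    # {'magic': False}
```

Here the assignment back in the caller is not just a stylistic signal; it is required, which removes the ambiguity altogether.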

— Footnotes —

(*1) Raymond Hettinger, in this SO article, references a post by Fredrik Lundh on “Call By Object”.

In a recent post, Amit talked about temporary files and gave a number of scenarios where they can be quite handy. In testing, I occasionally need temporary files and prefer to use mkstemp; however, the clean-up of the file was bothersome, and I found that I often needed to write something into the files before the test.

from dhp.test import tempfile_containing

contents = 'I will not buy this record, it is scratched.'
with tempfile_containing(contents) as fname:
    do_something(fname)


Thus tempfile_containing was my solution. It uses mkstemp, writes contents to the file, and is implemented as a context manager, so it returns the path/filename of a file that is cleaned up when it goes out of scope. (Note: NamedTemporaryFile and others return a file-like object; tempfile_containing returns the path/filename of an existing file.) After the file is written it is closed, so no inadvertent lock contentions occur on some OSs. If it sounds like something that would make writing tests faster for you, check it out at:

* documentation
* Source
* pip install dhp
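For the curious, the core of the idea can be sketched in a few lines with contextlib and mkstemp (this is my illustration of the approach, not the actual dhp source; the suffix parameter is my own addition):

```python
import os
from contextlib import contextmanager
from tempfile import mkstemp

@contextmanager
def tempfile_containing(contents, suffix=''):
    """Yield the path of a closed temp file pre-loaded with contents;
    remove the file when the with-block exits."""
    fd, path = mkstemp(suffix=suffix)
    try:
        with os.fdopen(fd, 'w') as fh:
            fh.write(contents)  # file is closed when this block ends
        yield path
    finally:
        os.remove(path)

with tempfile_containing('I will not buy this record.') as fname:
    with open(fname) as fh:
        print(fh.read())  # I will not buy this record.
# fname no longer exists here
```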

Comments as defined by Python in 2.7.x and 3.4.x

A comment starts with a hash character (#) that is not part of a string literal, and ends at the end of the physical line. A comment signifies the end of the logical line unless the implicit line joining rules are invoked. Comments are ignored by the syntax; they are not tokens.

Seems pretty straightforward to me. The parser finds a comment token and ignores everything up to the new-line token (except when implicit line joining rules are invoked). So why was I struck with a hmmm when I opened up a Python console, entered a comment, and pressed Enter?

>>> # this is a comment, guess what happens next?
...


If you look closely, the next line is prefixed with an ellipsis (…) and not a new prompt (>>>). Well there must be a reason, but this feels “unexpected.” So I pull up a 2.7 console and try it again.

>>> # this is a comment, guess what happens next?
...


So it looks to be intentional, although it doesn’t seem to feel correct. Ok, let’s give pypy a try and see what happens there.

>>>> # this is a comment, guess what happens next?
>>>>


Ok, now I am confused. CPython treats it as an unclosed statement of some kind, although you would think it would be closed because a new-line token should have been encountered thus closing off the comment. However, when pypy gave a different and less surprising result, I put a hand to my chin and said, “Hmmm.”

Ok interweb – does anyone know what is going on here?

1) Why the unexpected ellipsis in CPython?
2) Why does pypy not return an ellipsis?
3) Are they both correct, just slightly different implementations of the same reference? Or is one more correct than the other?

While I’m scratching my head, you can ponder this:

>>> # comment
... 2 + 2
4
>>>


A search for information on string interpolation in Python will inevitably lead you to comments and links to old documentation claiming that the string modulo operator is going to be deprecated and removed. However, that is just outright FUD. I need not make a case for the modulo operator; I’ll just let the code do the talking.

from timeit import timeit

def test_modulo():
    'Don\'t %s, I\'m the %s.' % ('worry', 'Doctor')

def test_format_explicit():
    'Don\'t {0}, I\'m the {1}.'.format('worry', 'Doctor')

def test_format_implicit():
    'Don\'t {}, I\'m the {}.'.format('worry', 'Doctor')

timeit(stmt=test_modulo, number=1000000)
timeit(stmt=test_format_explicit, number=1000000)
timeit(stmt=test_format_implicit, number=1000000)


Running the code on python 3.4.3 I get the following results:

>>> timeit(stmt=test_modulo, number=1000000)
0.668234416982159
>>> timeit(stmt=test_format_explicit, number=1000000)
0.9450872899033129
>>> timeit(stmt=test_format_implicit, number=1000000)
0.8761067320592701


Note that test_format_explicit is the form most commonly found on the web. However, the implicit version is a much closer equivalent to test_modulo. In this case, there is an apparent price for being explicit.

Until .format is on par, speed-wise, with %, there is no chance of it being deprecated. I support .format‘s existence; in some battlegrounds it is superior. You shouldn’t bring regexes to a fight that .startswith, .find, .endswith or in can handle cleanly. The same is true for .format and %.

And as PEP 461 demonstrates, the string modulo operator is not going quietly into the night.

This post was inspired by curiosity after reading this 2013 article.

I personally don’t like Python’s type annotations, the completely mask out what was a human friendly function signature. For something that was being proposed and skunk-worked in to stub files, according to Guido’s keynote at PyCon, it is spreading inside of source files, at an alarming rate, through out Python’s upper echelon.

Just this weekend, Type Annotations infested a blog post on Why print is now a function in Python 3. Luckily, Brett took out the hedge trimmers and cleared away the mess. It is an interesting blog post now that you can see what he is talking about. Now I see that they’ve spread to PEP8.

The usefulness of Static Type Checking against dynamic code is questionable at best. The damage that can and may be done to readable function signatures is frightening, on a scale similar to the Kudzu infestation and decimation of indigenous plants in the US. I encourage you to help fight this invasive and damaging trend by keeping your Type Annotations where they belong: in stub files.

Why doesn’t PEP8 encourage the use of stub files over obfuscating your code? A considerable number of people don’t like lint droppings ( # pylint:disable=… ) in their code and it goes in a comment, this trash is being put right in the function’s signature.

What is the practical upside to Static Type Checking? Guido talked about it in big terms and hand-waving in his keynote, and people use glowing buzzwords, but seriously, with examples, what can it actually do that writing testable code and tests can’t? The upside is as hard to see as a type-annotated signature.

Last night I gave a presentation on Robots and Python at the Omaha Python User’s Group meeting.

I’ve decided to lend out my robot to other group members who are interested in the topic. I am going to document how to get the Robot set up and create an environment to interact with it. It had been a while since I last used the robot (python2.4 or so) and I had to do a few things to get things fixed up with the current version of python and the supporting packages. I got a lot of help from this article but I am going to condense that information to what needs to be done on a linux platform.

NOTE: These instructions are for python2.7, I’ve read that python3.x is problematic, although I’ve not tried.

## Software Environment

1. First, set up a virtual environment:
   virtualenv robot
2. Change to the directory and activate the virtual environment:
   cd robot; source bin/activate
3. Install the dependencies:
   pip install numpy pyserial Pillow
4. Check out the latest myro source:
   svn co http://svn.cs.brynmawr.edu/Myro/trunk myro
5. Change into the myro/myro subdirectory and edit the graphics.py file. You will need to change the line import ImageTk to from PIL import ImageTk.
6. Change back up one directory and run the setup for the myro library:
   python setup.py install
7. Now go back one more directory so you are in robot.

## Bluetooth setup

My laptop didn’t have Bluetooth built-in so I used a dongle. Do what you need to do and open up your Bluetooth manager, then turn on the robot (with the fluke board attached). Robot requires 6 AA batteries and will run fine with rechargeable batteries if you have them. Install them in bottom compartment. Install fluke board by mating it to the rs-232 connector on the top of the S2. Power switch is a black slider by the comm port.

Look in your Bluetooth manager for a device that has IPRE in the string. Pair with it using the code 1234. Make a note of the device that is set up; on my rig it was /dev/rfcomm0.
NOTE: on mine, /dev/rfcomm0 was root:dialout but I couldn’t access it. I was too lazy to check my groups, so I just pulled out the hammer and hit it with:
sudo chmod 666 /dev/rfcomm0

Inside your virtualenv, fire up python and enter the following:

from myro import *
initialize('/dev/rfcomm0')


Repeat the initialize command until you are successfully connected; you’ll hear the beeps once you connect. Now let’s test it, assuming we are starting where we left off above. (If not, start the Python interpreter again and issue the import and initialize commands from above.)

forward(1, 1)


See the manual for a list of commands or just dir() and help(interestingCommand)

If you run into any issues please let me know. I am going to keep this post updated so as others borrow the robot they’ll have some up to date instructions to get them started. If you are a member of the Omaha Python Group and would like to arrange to borrow the robot, please contact me.

I’ll post some robot code in future posts.
Have Fun!

I like the idea of listing changes to my distribution in the long_description in setup.py. So a release ago, I started appending docs/changes.rst to the README.rst file that I am using for PyPI. It was a simple doc with bulleted lists. The world was good.

with open('README.rst') as h_rst:
    LONG_DESCRIPTION = h_rst.read()

with open('docs/changes.rst') as h_rst:
    LONG_DESCRIPTION += h_rst.read()


However, I grew unsatisfied with my changes.rst file and wanted it to link, à la cross-references, to the documentation, so that when reading the docs a user could quickly go see the docs for that item. For example:

* added ``preserve_mtime`` parameter to :meth:`.put`, optionally updates the remote file’s st_mtime to match the local file.

Sphinx liked it, I liked it.
However, python setup.py check --restructuredtext --strict gagged when it saw it, and so would PyPI.

Harumph! I mutter to myself. I want it all, I want Don’t Repeat Yourself(DRY), I want my change log to display on PyPi, I want cross-references in my docs. However, cross-references don’t make sense in the long_description, what they link to isn’t there. I do not want to update changes in two different places, I am already vexed with making sure that just the one document is updated. After all, who likes writing docs more than writing code?

Well, how do I get the cross-references scrubbed out for the long_description? Here is what I coded up:

with open('README.rst') as h_rst:
    LONG_DESCRIPTION = h_rst.read()

with open('docs/changes.rst') as h_rst:
    BUF = h_rst.read()
    BUF = BUF.replace('``', '$')  # protect existing code markers
    for xref in [':meth:', ':attr:', ':class:', ':func:']:
        BUF = BUF.replace(xref, '')  # remove xrefs
    BUF = BUF.replace('`', '``')  # replace refs with code markers
    BUF = BUF.replace('$', '``')  # restore existing code markers
LONG_DESCRIPTION += BUF


It is a Decorate, Scrub, Transform, Undecorate kind of pattern. Stripping out the :roles: tags left the single-backtick markers. So, if I wanted to change those to double backticks, I had to hide all of the existing double backticks, scrub, transform, and then unhide the originals. And so that is what I did.
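A quick demonstration on a sample line (my example, wrapped in a function; the $ protection marker follows the scheme above):

```python
def scrub(buf):
    """Strip Sphinx :role: tags and promote their single-backtick targets
    to double-backtick literals, preserving existing literals."""
    buf = buf.replace('``', '$')  # protect existing code markers
    for xref in [':meth:', ':attr:', ':class:', ':func:']:
        buf = buf.replace(xref, '')  # remove xrefs
    buf = buf.replace('`', '``')  # promote refs to code markers
    return buf.replace('$', '``')  # restore existing code markers

line = 'added ``preserve_mtime`` parameter to :meth:`.put`'
print(scrub(line))  # added ``preserve_mtime`` parameter to ``.put``
```

The cross-reference renders as plain literal text in the long_description while the docs keep the live link.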

I imagine a determined individual could create a regex to cover the entire breadth of sphinx directives and make a sphinx_to_pypi converter. But for me, my itch is scratched. Maybe it will help someone else too.

Huzzah!
Here is the resulting long_description on pypi and the changes doc on pysftp.rtfd.org

Who else is wrangling long_description from RestructuredText documentation? What are you doing?

0.1.3, 0.1.4, 0.1.5 – The Lost Versions

2014-05-05: I am updating this post to give the short answer: use python setup.py register to update your metadata on PyPI. It can be run repeatedly and will modify the metadata on PyPI for the distribution.

and now the original blog post…

During the first launch of YamJam, I went through a series of releases because of rendering issues of README.rst on PyPI. WAT!, I say to myself, and a few other choice words. I had just created 6 pages of rst that compiled just fine for my Sphinx-generated documentation. I had cut and pasted the top portion of my index.rst with some text edits — what could be going on? I uploaded my package with twine – no errors. PyPI seemed to say everything was peachy, then turned its back and mumbled FU. (Things should not fail silently.) I pulled up my shiny new package on PyPI and was met with unseemly, unformatted text instead of a spiffy display.

I google for answers, I review other dists in pypi, pulling up their repos and reading through their readmes — Aha! I say to myself, I don’t see anyone else using an :alt: on their build status badge. I remove mine, go through the release procedures again, I upload and was slapped down yet again.

I grow angry and frustrated. I recheck my readme, I google for related issues. I find mention of “run the docutils command on it”, but no mention of which command. I review even more dists on PyPI, seeing other broken readme renderings. I am unsure of what to do. I think, “It must be my .. code:: python directives” – I see lots of plain ::, so I rip mine out and replace them with ::. I try again and FAIL.

I reread the rst and put it through Sphinx by itself, and see a warning about duplicate references. I had `view <url1>`_ for docs and `view <url2>`_. One of those little edits I mentioned earlier. You think it looks like an href, smells like an href – but no, it is different. That did it; 3 revisions later, I’m happy with the display of my readme on PyPI.

It was a day later, while enhancing my release and CI scripts, that I found the python setup.py check and python setup.py check --restructuredtext commands. I tested locally, and sure enough, it would “warn” but not set a return code, so my release script couldn’t detect the failure. I figure that messing up your PyPI page should equal FAIL, not warn. Ok, so I’ll submit a patch to make it ERROR and set a non-zero return code. I find the code repos and start reading through the code and discover an option I hadn’t seen: -s, strict. That will cause it to FAIL and set a non-zero return code. So off to the CI script I go; I add the strict option and the test passes when it should have failed. If you don’t have docutils installed, and I didn’t on my CI, it just returns the same response as if it had passed. FACE PALM. pip install docutils

As it stands now, I can detect rst that will cause pypi to fail silently, so I am good in that regard, and you should be too, now that you have read this.

Sidebar: why doesn’t setup.py upload or twine upload automatically run the checks supplied by setup.py in their strictest mode? Fail early; the cost is lower.

Which brings me to unnecessary binding. Why is the description on pypi so tightly and unnecessarily bound to a distribution release, forcing a new release to fix rendering problems and typos? pypi will let us upload packages, but won’t let us edit the description in a web interface without an entire re-release. Use the readme as a starting point, then let us edit without re-releasing.

Alex G, if you happen to read this: please make this possible on warehouse. Mr. Gaynor, Tear Down this binding! Also, give us download stats with as much info as possible, so we can weed out mirror requests.

Anyone who is thinking “I’m going to rewrite pip because of X”: DON’T. We have had too many installers, too many variations on distutils and setuptools. setup.py is a cacophony of knobs, buttons and dials, many fighting each other. “There should be one-- and preferably only one --obvious way to do it.” Unless it has to do with setup.py; then, apparently, it should be as confusing as possible. There needs to be a pypi/setup.py BDFaW (Benevolent Dictator For a While). If a new solution doesn’t solve the current problem, or creates new problems, it is not a solution, it is just change for the sake of change. Eggs and wheels – harumph, I say.

I enjoy writing code in Python, I endure creating a release.

The world can and should be a better place.

Since I didn’t get to attend this year (again), I’ve been watching the PyCon videos (thanks, pyvideo.org and PyCon!). Out of all the videos, Carl Meyer’s talk, “Set your code free…”, struck a nerve.

I have a project, YamJam, that I have been using since 2009. The main idea is a framework that lets you factor sensitive data out of your code before you upload to a 3rd-party repo. I’ve got internal and external projects that have been using it for quite a while, and it makes these refactorings a breeze. I have other open source projects that get a lot more attention yet have a niche audience, so I am experimenting to see if a lack of documentation and of being a conforming pypi dist is what is limiting its appeal. YamJam should be popular, because it can scratch an itch caused by Django. That itch being: what do you do with settings.py? It’s got lots of sensitive data that shouldn’t be checked into a repo, but it also has a lot of code that should be. It also makes deployment between dev, staging and production easy to do with a checkout.
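The underlying pattern, sketched here without YamJam’s actual API (the function name, file path and keys below are all hypothetical), is to keep a machine-local secrets file out of the repo and have the checked-in settings.py read from it:

```python
import json
import os


def load_secrets(path='~/.myproject/secrets.json'):
    """Read the machine-local secrets file that never enters the repo."""
    with open(os.path.expanduser(path)) as f:
        return json.load(f)

# In settings.py (checked in), only *references* to secrets appear:
#   _s = load_secrets()
#   DATABASES['default']['PASSWORD'] = _s['db_password']
```

Dev, staging and production each carry their own secrets file, so a plain checkout picks up the right values for that machine.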

To that end, I’ve been creating a proper and complete test suite (it was doctests) using py.test and tox, continuous integration with drone.io, documentation via readthedocs.org and sphinx, spiffing up the setup and dist with the latest distutils, and uploading with twine instead of setup.py upload.

Through this whole process, I’ve had a lot of new experiences that I am going to be blogging about in the upcoming weeks. Things I like, really like, things that are annoying, and some things that are counter to my way of thinking. Along the way I’ve also been filing bug reports as I’ve hit problems, and sending out feedback for improvements.

After moving my code from subversion on google code to mercurial on bitbucket (hg convert), I started looking for a CI service to use. Off the bat, I looked at travis-ci, but unfortunately, travis is a github snob. If you are not hosting your code on github, or at least mirroring it there, then travis-ci is not an option. Some google searching showed that pylint (a tool I like very much and use) moved from internal tools to bitbucket and http://drone.io/ . So off to drone I go.

http://drone.io/ took about 10 minutes from signing in via my bitbucket account to running my first integration. When I learned that drone.io lets you view the build-environment settings of other open source projects, I had Python 2.7, 3.2, 3.3 and 3.4 tests running via tox less than an hour later. What higher praise could I give a service than saying from 0 to testing in 10 minutes? I really, really like drone.io and recommend that you check them out. Unlike travis-ci, drone.io supports bitbucket, github and google code. Options, I like. It is the same reason I prefer bitbucket to github: bitbucket supports both mercurial and git. I use both dvcs systems; I prefer mercurial. I like that bitbucket lets me make the choice. Choice makes me happy. Check out my setup on drone.
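For reference, the tox side of that 2.7/3.2/3.3/3.4 matrix looks roughly like this. This is a minimal sketch, not my exact config; the pytest dependency and bare py.test command are assumptions:

```ini
[tox]
envlist = py27, py32, py33, py34

[testenv]
deps = pytest
commands = py.test
```

With that in place, the drone.io build script only needs to install and run tox; tox handles creating each interpreter’s virtualenv and running the suite.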

More to follow.

I’ve run into this situation a few times and end up having to query my gBrain for the answer. When using JSON as a transport from Python to HTML/JavaScript, I frequently need to move date and time data. However, the builtin json module is not happy when you ask it to serialize a date/time object.

>>> json.dumps(datetime.datetime.now())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/json/__init__.py", line 230, in dumps
    return _default_encoder.encode(obj)
  File "/usr/lib/python2.6/json/encoder.py", line 367, in encode
    chunks = list(self.iterencode(o))
  File "/usr/lib/python2.6/json/encoder.py", line 317, in _iterencode
    for chunk in self._iterencode_default(o, markers):
  File "/usr/lib/python2.6/json/encoder.py", line 323, in _iterencode_default
    newobj = self.default(o)
  File "/usr/lib/python2.6/json/encoder.py", line 344, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: datetime.datetime(2011, 12, 1, 0, 50, 53, 152215) is not JSON serializable


So this means we need to figure out a workaround. The trick is to let the json module know what it should do with a date/time object while leaving the rest of it in place. So, no replacing the default handler wholesale.

What we need to do is subclass JSONEncoder and override its default method:

import json

class JSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, 'isoformat'):  # handles both date and datetime objects
            return obj.isoformat()
        else:
            return json.JSONEncoder.default(self, obj)


Using hasattr to look for an ‘isoformat’ method allows this to handle both date objects and datetime objects. So all that is left to demonstrate is how to put it together with the json.dumps method.

>>> json.dumps(datetime.datetime.now(), cls=JSONEncoder)
'"2011-12-01T00:58:34.479929"'
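As an aside: if you would rather not subclass at all, json.dumps also accepts a default= callable, which the encoder invokes only for objects it cannot serialize on its own. A small sketch (the name encode_dates is mine):

```python
import datetime
import json


def encode_dates(obj):
    # Called only for objects the encoder chokes on.
    if hasattr(obj, 'isoformat'):  # date and datetime both have it
        return obj.isoformat()
    raise TypeError("%r is not JSON serializable" % obj)


print(json.dumps({'when': datetime.date(2011, 12, 1)}, default=encode_dates))
# prints {"when": "2011-12-01"}
```

Either spelling works; default= is handy for one-off dumps, while the subclass is nicer when the same encoder gets reused all over a codebase.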


Ok, so now you have this ISO-formatted string containing date information; how do you get it converted to a JavaScript Date object once it has been transmitted across the abyss and resides in a JavaScript callback function?

var d = new Date("2011-12-01T00:58:34.479929");


Happy Data Slinging!

So, I’m hacking on a Python for Android project, which is built on top of the SL4A project. I’m currently using the remote method of development, where you fire up an interpreter on the device and share it in public mode. Then you import android.py and instantiate an Android instance with the IP and port of the public server. You can then hack in your favorite editor on a laptop instead of using a thumb-board or the like. It looks like this:

import android  # The SL4A android.py module should be on your sys.path.

ip = '192.168.x.xxxx'
port = 35766
droid = android.Android((ip, port))


What happens is that the __getattr__ method of the Android object uses magic to change droid.getenvironment() into an RPC call to the public server, returning the result as a named tuple. Nice. Being the nosey bugger that I tend to be, I modified the code to add a debug param to the __init__ method that, when set, prints what is being sent out over RPC and then the raw tuple result. A snippet of the modification goes like this:

def __getattr__(self, name):
    def rpc_call(*args):
        if self._debug:
            print "droid.%s%s" % (name, str(args))
        res = self._rpc(name, *args)
        if self._debug:
            print "\t%s" % str(res)
        return res
    return rpc_call


You can easily see where I put the “if self._debug” logic. Now, if I use my modified android.py, I can turn on the debug flag and get some 411 on the magic that is going on. It ends up looking like this:

droid.eventWait(3000,)
Result(id=28, result=None, error=None)
droid.eventWait(3000,)
Result(id=29, result={u'data': u'@end', u'name': u'dmt:fromClient.speak', u'time': 1322542655069000L}, error=None)
droid.wakeLockRelease()
Result(id=30, result=None, error=None)
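To show the proxy pattern on its own, here is a self-contained sketch (Python 3) with a stubbed-out _rpc, so it runs without a device or SL4A server. FakeAndroid and its canned Result values are obviously mine, not the real android.py:

```python
import collections

Result = collections.namedtuple('Result', 'id result error')


class FakeAndroid:
    def __init__(self, debug=False):
        self._debug = debug
        self._id = 0

    def _rpc(self, name, *args):
        # Stand-in for the real JSON-RPC round trip to the SL4A server.
        self._id += 1
        return Result(self._id, None, None)

    def __getattr__(self, name):
        # Any unknown attribute becomes a remote call, just like android.py.
        def rpc_call(*args):
            if self._debug:
                print("droid.%s%s" % (name, args))
            res = self._rpc(name, *args)
            if self._debug:
                print("\t%s" % (res,))
            return res
        return rpc_call


droid = FakeAndroid(debug=True)
r = droid.eventWait(3000)  # prints the outgoing call and the Result tuple
```

Because __getattr__ only fires for names Python cannot find normally, real attributes like _debug and _rpc resolve as usual, while any made-up method name turns into an RPC call.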