🗻 James Van Dyne

  • 🏡Home
  • ✈️Trips
  • 🗺️Maps
  • ✏️Blog
  • 🔗️Links
  • 👉Now
  • 🏃Runs
  • ✏️Articles
  • 📤️Replies
  • 💬Status
  • 🔖️️Bookmarks
  • 🗺Checkins
  • 📅The Week
  • 🖥Tech
  • 🌲Sustainability
  • 🏃Running
  • 🧠Thoughts
  • 🇯🇵Japan
  • 💡TIL
  • ⛰Tanzawa
  • Checkin to 泉の森

    泉の森 35.42063680251615 139.5134731287625
    Apr 11, 2020
    by James
    in Japan

    Social distancing in the local forest. Such a great little place to let the little one run about and have a snack surrounded by trees.



    🔗permalink
  • Checkin to 弥生台駅前公園

    弥生台駅前公園 35.43038512118084 139.5062232133423
    Mar 22, 2020
    by James
    in Yokohama, Kanagawa, Japan

    Lovely park weather and the blossoms are starting to bloom.



    🔗permalink
  • Checkin to Tully's Coffee

    Tully's Coffee 35.31103396326208 139.4872309830384
    Mar 20, 2020
    by James
    in Fujisawa, Kanagawa, Japan

    The best spot to drink a coffee and watch some trains in front of Enoshima station.



    🔗permalink
  • Checkin to Enoshima Beach (江ノ島ビーチ)

    Enoshima Beach (江ノ島ビーチ) 35.30816905752166 139.4810187792224
    Mar 20, 2020
    by James
    in Fujisawa, Kanagawa, Japan

    海ラブ (Love the sea)



    🔗permalink
  • Checkin to Starbucks

    Starbucks 35.42861 139.506997
    Mar 01, 2020
    by James
    in Kanagawa, Japan

    Sakura donuts and an iced coffee while Leo sleeps.

    🔗permalink
  • Checkin to 戸塚税務署

    戸塚税務署 35.39927871436361 139.5408103580196
    Feb 17, 2020
    by James
    in Yokohama, Kanagawa, Japan

    Tax office is a zoo.

    🔗permalink
  • Checkin to Lien SANDWICHES CAFE 横浜店

    Lien SANDWICHES CAFE 横浜店 35.44658124168285 139.6409809816266
    Feb 15, 2020
    by James
    in Yokohama, Kanagawa, Japan

    American style club sandwich!



    🔗permalink
  • Handling Unclosed HTML tags with BeautifulSoup4

    Feb 08, 2020
    by James

    A side project of mine is to archive the air pollution data for the state of Texas from the Texas Commission on Environmental Quality (TCEQ). My archiver then tweets via @Kuukihouston when levels of certain compounds rise above thresholds that the EPA has deemed a health risk.

    Recently I added support to automatically update the list of locations it collects data from, rather than relying on a fixed list. Doing so is straightforward: download the webpage, look for the <select> box that contains the sites, and scrape the value and text of each <option>.
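
    That step can be sketched roughly like this (the function name is hypothetical and the download itself is omitted; assumes beautifulsoup4 and lxml are installed — the parser choice matters for this page, as explained below):

```python
# A minimal sketch of the scraping step (hypothetical function name;
# assumes beautifulsoup4 and lxml are installed).
from bs4 import BeautifulSoup


def parse_sites(html):
    """Return {value: text} for each <option> in the page's <select>."""
    soup = BeautifulSoup(html, "lxml")
    select = soup.find("select")
    return {
        option.get("value"): option.get_text(strip=True)
        for option in select.find_all("option")
    }
```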

    There was only a single hiccup during development of this feature: the developers don’t close their <option> tags and instead rely on web browsers “to do the right thing”.

    That is, their code looks like this:

            <option>Oyster Creek [29]
            <option>Channelview [R]

    When it should look like this:

            <option>Oyster Creek [29]</option>
            <option>Channelview [R]</option>

    Luckily, web browsers excel at guessing and fixing incorrect HTML. But as I do not rely on a web browser to parse the HTML, I’m using BeautifulSoup. BeautifulSoup’s html.parser closes the open tags at the end of all of the options, i.e. just before the </select> tag. As a result, when I try to get the text for the first option in the list, I get the text for the first option plus every following option.

    The simple fix is to switch from html.parser to the lxml parser, which closes each open <option> tag at the beginning of the next <option> tag, allowing me to get the text for each individual item.

    # Bad
    soup = BeautifulSoup(response.text, 'html.parser')
    # Good
    soup = BeautifulSoup(response.text, 'lxml')
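
    The difference is easy to demonstrate on a minimal fragment (assuming beautifulsoup4 and lxml are both installed):

```python
# Demonstration of the parser difference on a minimal fragment
# (assumes beautifulsoup4 and lxml are installed).
from bs4 import BeautifulSoup

html = "<select><option>Oyster Creek [29]<option>Channelview [R]</select>"

# With html.parser, the unclosed options nest, so the first option's
# text includes the text of every option that follows it.
bad_first = BeautifulSoup(html, "html.parser").find("option").get_text()

# With lxml, each open <option> is closed at the start of the next one,
# so each option's text stands alone.
good_first = BeautifulSoup(html, "lxml").find("option").get_text()
print(good_first)  # Oyster Creek [29]
```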

    🔗permalink
  • GraphQL Best Practices: Testing Resolvers

    Feb 05, 2020
    by James

    When getting started with GraphQL and Python, most of the documentation you’ll find is focused on the basics: basic queries, filtering using pre-built libraries, and so forth. This is great for quick “Hello World” APIs, but there isn’t much discussion of best practices for building out larger APIs, testing, or maintenance. Perhaps it’s just too early in the Python-GraphQL story for best practices to have been fully established and documented.

    Introductory GraphQL examples online don’t really require much testing of the resolver specifically, because these examples just return the results of a Django queryset directly. For those types of fields, executing a full query is usually enough. But how do you handle more complex resolvers, ones that do some processing?

    Accessing your resolver directly from a unit test is difficult and cumbersome. To properly test a resolver, you're going to need to split the parts that warrant independent testing into their own functions / classes. Once split, you can pass in the required input for processing and assert on the results.

    However, passing or returning Graphene objects to your functions will make testing them difficult in much the same way that calling your resolver outside of a GraphQL request is difficult: you can't access the attribute values directly - they must be resolved.

    blog = Blog(title="my title")
    assert blog.title == "my title"  # fails

    Where Blog is a Graphene object, the above test will fail: blog.title is not the string you'd expect, but rather a Graphene wrapper that will eventually return "my title" when passed through the GraphQL machinery.

    There are two ways to work around this:


    1. Pass in `namedtuples` that match your Graphene objects attribute for attribute. This will become a maintenance headache: each time your object changes, you'll need to update your named tuples to match.

    2. Pass/return primitive values into your functions and box them into Graphene objects in your resolver directly before returning.

    I've done both in my code and think the second method is a best practice when writing GraphQL APIs.

    By passing the values from Graphene to your processing functions as primitives, your code is no longer tied directly to Graphene. You can change frameworks and re-use the same code.

    It's also easier to write tests, as you pass in common objects like dict and int, which are easy to compare for equality and simpler to reason about.

    Takeaways:


    1. Break your logic out of your resolver and into specialized functions / classes.

    2. Pass/return primitive or other such values from these functions to maximize reuse.

    🔗permalink
  • I Didn't Know iCloud Photo Was a Thing

    Feb 01, 2020
    by James

    When sharing photos at work, most of my co-workers would simply post a link to Google Photos in our company Slack. As an iCloud user, I thought my photos were only visible on my Mac or iPhone - machines logged in to my Apple account and set up to sync photos. So if I wanted to share photos with co-workers on Slack, I had to either upload them to Flickr or upload them directly into Slack. I always just uploaded them into Slack.

    I just realized today that I can view all of my photos on the web via iCloud Photos. What’s more, I can share photos with a URL, just like my co-workers have been doing with Google Photos. The shared link also expires after 1 month, which is a nice additional security / privacy feature.

    Knowing that I can access my photos outside of Apple devices eases my mind. While I can’t ever see myself switching to Android from iOS, I could see myself using a Thinkpad + Linux for my desktop computing needs.

    🔗permalink
Powered by
🏔Tanzawa

James Van Dyne
Web developer living in Japan.