OpenStreetMaps – an open-source Maps application

Dion Moult
, 19/06/2018 | Source: thinkMoult

Recently I’ve been interested in an initiative known as OpenStreetMaps. Launched in 2004, OpenStreetMaps is the open-source equivalent of Google Maps, and functions largely like Wikipedia does (in fact, it was inspired by Wikipedia) – it’s a map of the world drawn completely by volunteers and open-source enthusiasts.

OpenStreetMaps world map

You might’ve already seen OSM in action. It’s used by default in the privacy-friendly search engine DuckDuckGo and in other wiki-based projects like WikiVoyage, and many games, such as Pokemon Go, use it as a base layer.

PokemonGo uses OpenStreetMaps as a base layer

You’ve probably used Google Maps before and have it installed on your phone to help you drive to places with the GPS. You may have also played with Bing Maps which essentially does the same thing. At first glance OpenStreetMaps is purely a clone: you can zoom in and out, look at street names and see buildings, and have it tell you how to drive to a destination. It’s not that exciting, and isn’t worth talking about.

However, if you are a user of OSM, occasionally you might notice areas of the map where volunteers have gone above and beyond to draw details of the environment that other maps do not. Things like individual driveways, articulated building outlines, kerbside grass, wheelchair accessible walkways and kerb ramps, individual bush and tree locations, fences, and parking niches. Zooming in, we can identify storm drains, streetlamps, water taps and park benches. This level of detail is possible because the map is created by people who are genuinely interested and put real love and care into their work. The example below is in Brisbane, Australia, mapped largely by a fellow called ThorstenE.

OpenStreetMaps example in Brisbane, Australia

Where OSM really excels is as an open-data resource. Usually, you are limited to the raster map images produced by Google Maps and Bing, and aren’t allowed to access the underlying database of geographic and vector information. In contrast, because the data in OSM’s database is free for everyone, specialist maps can easily be created. Take for example the extensive mapping of skiing and snowboarding tracks in Oslo, Norway provided by OpenSnowMap …

OpenSnowMaps

… alternatively there is the Whitewater rafting map in the UK …

Open Whitewater Rafting maps

… and the OpenCycleMap which maps the world’s bicycle routes, and shows the incredible culture of pedestrian and cycling friendly urban planning in the Netherlands.

OpenCycleMaps example
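
These specialist maps are possible because anyone can query the raw data directly. As a rough illustration (not affiliated with any of the maps above), here is a minimal Python sketch that pulls every mapped drinking water tap in a small bounding box from the public Overpass API (the bounding box coordinates are just an arbitrary patch of Sydney):

import json
import urllib.parse
import urllib.request

# Overpass QL query: every node tagged amenity=drinking_water inside a
# bounding box given as (south, west, north, east). The box below is an
# arbitrary example around Sydney.
query = '[out:json][timeout:25];node["amenity"="drinking_water"](-33.90,151.18,-33.85,151.23);out;'

data = urllib.parse.urlencode({'data': query}).encode('utf-8')
response = urllib.request.urlopen('https://overpass-api.de/api/interpreter', data)
taps = json.loads(response.read())['elements']

for tap in taps:
    print(tap['lat'], tap['lon'], tap.get('tags', {}))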

OSM also helps lead the way in humanitarian mapping. When a flood, fire, earthquake or other natural disaster occurs, existing maps provided by Google and Bing are no longer current. Mappers need to create new maps so that disaster relief teams can coordinate their efforts, target houses for rescue efficiently, and know which routes relief organisations can take to navigate the terrain. This work is done by the excellent Humanitarian OpenStreetMap Team. The work also extends beyond natural disasters, to mapping demographics and environmental issues related to poverty elimination, gender equality, refugee response strategies, public disease outbreaks, clean energy, and water and sanitation. As one current example, right this minute monsoon rains have caused severe flooding in the Kurunegala and Puttalam districts of Sri Lanka. A map is being prepared so that first responders and aid agencies can deliver relief supplies. A grid of zones with their mapping progress is updated in real time below.

Humanitarian OpenStreetMaps Team map in Nepal

As an open-source creation, OSM doesn’t data-mine your activity, so you can use it as a maps application without privacy concerns. You can download the raw vector data to use it offline on your phone, and its approach to data licensing allows people who want to embed OSM technologies in their own creations to do so in a much more flexible manner. If you feel strongly about supporting privacy-aware applications (especially after the Cambridge Analytica scandal), and encouraging communities that aren’t motivated by profit, OpenStreetMaps should be something to consider. There are over 1,000,000 mappers who have contributed to OSM, and you can become one of them too.

One of the most amazing things about OSM is that although mapping the world is an inherently complex process, it has managed to make it easy, fun, and doable by anybody who knows how to draw a rectangle with their mouse. Most other open-source initiatives have a high learning curve and lots of technical prerequisites, but OSM is completely the opposite. Just zoom into your city on OSM.org and click the Edit button on the top left. It will give you a short tutorial that lets you draw new roads and buildings within minutes. The thought that has gone into the user-friendliness of this online map editor is absolutely incredible.

OpenStreetMaps iD web-based editing software

I’ll talk about OSM a bit more in upcoming posts, and share some of the more interesting technical sides of things.


Deleting Facebook, and a reflection on digital privacy

Dion Moult
, 01/04/2018 | Source: thinkMoult

In the wake of the recent Cambridge Analytica privacy issue in the news, I have decided to #DeleteFacebook. The thinkMoult blog is still represented via the public Facebook thinkMoult page, but my private profile has been cleared out. Given that Facebook is increasingly sharing our profile data (as shown in the graph below, produced from Facebook’s very own reports), clearing out the account makes a difference, albeit a small one. I also thought it would be good to share a few things I’ve learned about Facebook in the past couple of weeks, related to my new year’s resolution to improve digital security.

Facebook government requests over time

(Note: you can compare with Google’s data disclosure over time)

First, I’d like to commend Facebook’s behaviour so far. Being the world’s largest social network probably isn’t easy, and Facebook has made initiatives to increase its transparency. For instance, they issue a transparency report, and they use the Signal secure messaging protocol for a secure chat mode in FB Messenger. It is also possible to download your Facebook data, and place restrictions on data sharing with apps and advertisers. Their data retention policy also seems to suggest that if you delete data from your account, it’s also gone from their servers.

However, of course, this isn’t the complete picture. Take for instance the world map of Facebook government requests in the first half of 2017 from their very own transparency report.

Facebook government requests in 2017

The map (split into Jenks natural breaks) shows that US government requests are miles ahead of the rest of the world in asking Facebook for information. Most governments from other countries don’t play any part in this.

However, the map is incomplete. It is also not possible to see data shared through indirect means. Developers can easily create apps that integrate with Facebook. Whether you answer a survey through Facebook or use Facebook to log into another service, they can have varying degrees of access to your profile and friend information. This may also occur without your explicit consent. For instance, my meager Facebook usage has resulted in my details being shared with 138 companies. This is not to mention that Facebook trackers are on 25% of websites online. Oh, and let’s just forget Facebook altogether: Google trackers are on 75% of websites online (and yes, also on my blog). Basically, you are always tracked online, from the way you move your mouse to how you feel, which can be combined through machine learning to indirectly define character profiles, interests, and demographics.

Like most technologies, this data can be used for very positive things and very negative things alike. The negative side comes when services we assume are private social platforms are actually not. This data may be used to influence political elections, help China rank all citizens, rebrand political news as fake news in Malaysia, or even be accessed by any law enforcement agency around the world without notification or warrant. Whatever the use, people misunderstand one thing: posting on Facebook is not a private matter, it is public.

Deleting Facebook is one step of many to promote the idea that just as there are public outlets for expression online (blogs, Twitter, Facebook) there equally are private outlets (Signal, Tor, ProtonMail). Of course, there is nothing inherently wrong with either outlet, but we should recognise these differences in privacy and know when to choose between them.

For more reading, see why digital rights matters, even though you don’t think it impacts you, and how you can improve human rights by changing your messaging app.


How to download the Australian BioNet Database

Dion Moult
, 27/03/2018 | Source: thinkMoult

Did you know that there is a nest of endangered long-nosed bandicoots living just beside the popular Manly beach in Sydney, Australia? Well, I didn’t, until I looked at BioNet. The NSW government created BioNet as a database of all flora and fauna species sightings in NSW. It’s absolutely fantastic. If you’re an architect and want to see how you might impact the urban ecosystem in NSW, look at BioNet. If you’re an ecologist of some kind, you probably already use it. If you’re just a good citizen who wants to remodel your back yard to improve urban ecology, BioNet is there for you.

Fortunately, BioNet comes with an online search system called Atlas. It’s simple to use, but unfortunately it has limits on the data it produces. It won’t show you all the fields associated with species, won’t show meta fields, and has a limit to the quantity of records shown. Thankfully, BioNet also comes with an API which can be queried with a bit of programming knowledge. I’ve written a bit of Python which will allow you to download regions of data, but before we get to that, let’s see a graphic!

Sydney BioNet species map

I’ve plotted every species on the database close to Sydney in the map above. Size is relative to the number of species sighted (logarithmic relationship). I haven’t done any real filtering beyond this, so it’s not very meaningful, but it shows the data and shows it can be geolocated. It also looks like someone murdered the country, but I’ll post the interesting visualisations in a future post.

The Python code works in two parts. The first part queries the API for JSON results, divided into square tiles covering a region defined by top-left and bottom-right latitude and longitude coordinates. This’ll give you a bunch of *.json files in the current working directory. Edit the coordinates and resolution as necessary, and off you go. I’ve put in a series of fields that should be good for more general uses, but you can check the BioNet Data API for all fields.

import os

# Top-left and bottom-right corners of the region to download
start = (-33.408554, 150.326152)
end = (-34.207799, 151.408916)

lat = start[0]
lon = start[1]

# Build the OData query URL selecting the fields of interest for one tile
def create_url(lat, lon, lat_next, lon_next):
    return 'https://data.bionet.nsw.gov.au/biosvcapp/odata/SpeciesSightings_CoreData?$select=kingdom,catalogNumber,basisOfRecord,dcterms_bibliographicCitation,dataGeneralizations,informationWithheld,dcterms_modified,dcterms_available,dcterms_rightsHolder,IBRASubregion,scientificName,vernacularName,countryConservation,stateConservation,protectedInNSW,sensitivityClass,eventDate,individualCount,observationType,status,coordinateUncertaintyInMeters,decimalLatitude,decimalLongitude,geodeticDatum&$filter=((decimalLongitude ge ' + str(lon) + ') and (decimalLongitude le ' + str(lon_next) + ')) and ((decimalLatitude le ' + str(lat) + ') and (decimalLatitude ge ' + str(lat_next) + '))'

i = 0
resolution = 0.05  # tile size in degrees

# Walk the region tile by tile, saving each API response as <i>.json
while (lat > end[0]):
    while (lon < end[1]):
        lat_next = round(lat - resolution, 6)
        lon_next = round(lon + resolution, 6)
        url = create_url(lat, lon, lat_next, lon_next).replace(' ', '%20').replace('\'', '%27')
        os.system('curl \'' + url + "\' -H 'Host: data.bionet.nsw.gov.au' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Cookie: NSC_EBUB_CJPOFU_443_mcwjq=ffffffff8efb154f45525d5f4f58455e445a4a423660' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Upgrade-Insecure-Requests: 1' -H 'Cache-Control: max-age=0' > " + str(i) + '.json')
        i += 1

        lon = round(lon + resolution, 6)
    lon = start[1]
    lat = round(lat - resolution, 6)

Now we’ll run another little script which will convert all the json files in the directory into a single csv file. You can read this csv file in programs like Excel or QGIS for further analysis.

import unicodecsv as csv
import json

f = csv.writer(open('bionet.csv', 'wb+'), encoding='utf-8')
number_of_json_files = 352  # set this to the number of tiles the first script produced

f.writerow([
    'IBRASubregion',
    'basisOfRecord',
    'catalogNumber',
    'coordinateUncertaintyInMeters',
    'countryConservation',
    'dataGeneralizations',
    'dcterms_available',
    'dcterms_bibliographicCitation',
    'dcterms_modified',
    'dcterms_rightsHolder',
    'decimalLatitude',
    'decimalLongitude',
    'eventDate',
    'geodeticDatum',
    'individualCount',
    'informationWithheld',
    'observationType',
    'protectedInNSW',
    'scientificName',
    'sensitivityClass',
    'stateConservation',
    'status',
    'kingdom',
    'vernacularName',
    ])
# Append every record from every downloaded tile to the CSV
i = 0
while i < number_of_json_files:
    data = json.load(open(str(i) + '.json'))
    print(i)
    for x in data['value']:
        f.writerow([
            x['IBRASubregion'],
            x['basisOfRecord'],
            x['catalogNumber'],
            x['coordinateUncertaintyInMeters'],
            x['countryConservation'],
            x['dataGeneralizations'],
            x['dcterms_available'],
            x['dcterms_bibliographicCitation'],
            x['dcterms_modified'],
            x['dcterms_rightsHolder'],
            x['decimalLatitude'],
            x['decimalLongitude'],
            x['eventDate'],
            x['geodeticDatum'],
            x['individualCount'],
            x['informationWithheld'],
            x['observationType'],
            x['protectedInNSW'],
            x['scientificName'],
            x['sensitivityClass'],
            x['stateConservation'],
            x['status'],
            x['kingdom'],
            x['vernacularName'],
            ])
    i += 1
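
If you want to reproduce something like the species bubble map near the top of this post, a rough sketch along these lines will do it (this assumes pandas and matplotlib, isn't the exact script I used, and the marker scaling factor is arbitrary):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Count distinct species per rounded coordinate and scale the marker size
# by the logarithm of that count, as described earlier.
df = pd.read_csv('bionet.csv')
df['lat'] = df['decimalLatitude'].round(2)
df['lon'] = df['decimalLongitude'].round(2)
counts = df.groupby(['lon', 'lat'])['scientificName'].nunique().reset_index(name='n')

plt.scatter(counts['lon'], counts['lat'], s=5 * np.log1p(counts['n']), alpha=0.3)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.savefig('species_map.png', dpi=300)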

That’s it! Have fun, and don’t forget to check for frogs in your backyard. If you don’t have any, build a pond. Or at least a water bath for the birds.


Improving human rights through secure messaging

Dion Moult
, 18/03/2018 | Source: thinkMoult

Earlier this year, I talked about how important digital privacy is (even if you don’t think it is). I talked about political oppression, and how raising the awareness of basic digital privacy largely benefits those who are politically oppressed. Using secure services increases the amount of infrastructure dedicated towards them and raises the standards of digital security worldwide. But before we talk about how we make the first steps, let’s remind ourselves why this is so important.

2018 Freedom in the World index map

The map above shows the results of the 2018 Freedom in the World index, derived largely from the Universal Declaration of Human Rights. At the time, the declaration was adopted without opposition by the UN member states. Green is free, yellow is partly free, and red is not free. As of 2018, more than half the countries in the world have issues.

Percentage of countries in the freedom in the world index over time

There are quite a few ways to slice and dice freedom index data, but the general trend since the 1970s can be seen in the graph above – showing the distribution of free countries over time. Generally, since the 1970s, we’ve improved a bit, but largely stalled in the past 20 years. More than half the world still seems to have some problems in regard to political and civil liberties, and a few are getting worse. Of course, the above data is a gross simplification, so if you’re interested in seeing more detailed and granular metrics I highly urge you to check other dimensions such as Our World In Data’s Human Rights graphs.

The good news is that we all use the internet, and by using it we shape how it grows, and that allows us to make an impact on human rights. The World Economic Forum illustrates the link between digital privacy and human rights in the quote below:

Digital rights are basically human rights in the internet era. The rights to online privacy and freedom of expression, for example, are really extensions of the equal and inalienable rights laid out in the United Nation’s Universal Declaration of Human Rights.

As a case study, Facebook biannually releases a report called the Global Government Requests report (see the 2017 Global Government Requests Report blog post). In the first half of 2017, it shows that there were roughly 79,000 government requests for data covering 115,000 user accounts. That’s more than triple what it was three years ago (35,000 user accounts). Every report sees an increase in the number of requests, easily growing more than 30% each year. Yikes! That’s some serious compounding privacy interest!

However, there are steps we can take to raise the basic levels of digital privacy online. By adopting these technologies, we increase the global average cost per capita of digital mass surveillance — and reduce its efficacy as a tool to control and oppress those in need.

Our online activity can largely be grouped into three categories, messaging, email, and web browsing. By changing a few habits in our day-to-day online activities, we can make a difference. In this article, we’ll concentrate on messaging.

We send messages all the time – SMSes, through Facebook Messenger, WhatsApp, Skype, Google Hangouts and so on. If you’re the statistically average user, you have 2 messaging apps, and they’re both on the chart below. The data comes from Statista, and I’ve rehashed it slightly.

Global monthly active users for different messaging apps

What you may not know is that big data on the internet is owned by a handful of companies, governed by a handful of countries. USA’s Facebook and China’s Tencent gather more of your messages than probably everything else combined. These companies have little to no incentive to protect your data, actively create digital profiles of you, and are based in countries whose governments are more than happy to ask for that data to be disclosed.

But don’t listen to me, listen to Amnesty International’s Encryption and Human Rights Report instead. Unless you’re using Facebook’s WhatsApp (which is the least bad), Amnesty International thinks you deserve a slap on the wrist. Worst of all messaging apps are China-based Tencent’s QQ and WeChat, which score a 0 out of 100 in protecting human rights. They have no encryption specification, do not recognise threats to human rights, have made no commitment to freedom of expression, actively detect and censor content, and do not refuse backdoor implementations. So, if you send money through WeChat (yes, WeChat has higher transaction volumes than PayPal), guess what? It’s public! We could go through the many examples of public data, but I’ll let you read the publication yourself and judge.

So what makes Facebook’s WhatsApp the least bad? Well, for a start it has publicly stated there is no encryption backdoor – no built-in mechanism for sharing your data. It’s more transparent, tries to notify you if your data is being requested, and produces the biannual reports that we saw above. But perhaps the most effective secret sauce — the gold standard of digital human rights protection — is that it supports end-to-end encryption. This means that the moment your message leaves your device, nothing except the recipient’s device can read it.

WhatsApp’s end-to-end encryption isn’t its own invention. Like any robust cryptography standard, it is based on free and open-source software. Many years ago, defectors from Twitter started a collaborative effort called Open Whisper Systems and developed the Signal secure messaging system. Signal is not owned by any company or country, is open-source, and is primarily funded by the Freedom of the Press Foundation. For instance, if you want to tip off The Guardian, Signal is one of your options.

Signal logo

However, despite WhatsApp’s best intentions in using the Signal system under the hood, its nature as a Facebook acquisition, its organisational structure, and some of its other technical decisions mean that WhatsApp falls short of Signal’s encryption standards. In short, WhatsApp retains metadata about your contacts and messages, which may be used to infer information about you (much more than you might think!). Luckily, the small core team that built the Signal system also has its own app, which is completely privacy focused. It looks just like any other messaging app out there, and anyone can use it if they truly want top-notch security and privacy. Here’s a screenshot of it from the official Signal website. If you have an iPhone or Android, you can download it from the app store for free. It works on your computer with a desktop app, and also as a Signal command line app if you’re a terminal junkie.

Signal messenger app screenshot

In fact, the core Signal app is such an ideal of privacy in the messaging world that apart from earning a special mention in the Amnesty International report, it also earned a 50 million USD investment from the co-founder of WhatsApp. Brian Acton, the co-founder of WhatsApp, was around when WhatsApp made the initial jump to use Signal as its system under the hood, and after he left Facebook and WhatsApp, he donated to create the Signal Foundation – a non-profit organisation to protect data privacy, transparency, and open-source development, which aligns with Acton’s personal beliefs.

If two people want a private conversation, electronic or not, they should be allowed to have it. – Brian Acton, WhatsApp co-founder

There’s still so much to talk about, but let’s stop here. Even if you do not fully understand the technical background behind encryption or the full extent of the human rights impact, I highly recommend taking the first step and installing Signal.

See you on the other side!


A history of rendering engines and modern trends

Dion Moult
, 13/03/2018 | Source: thinkMoult

When working in the architecture industry, we often look at rendering engines from many angles, if you’d pardon the pun. We use simple renderings for diagrams, realistic rendering for architectural visualisations, real-time rendering for virtual reality prototyping, point cloud renderings for landscape and heritage scans, and so on. As the ultimate goal in archviz is to both represent abstractly and lower the costs of realistic prototyping, it pays to see what the future of rendering holds for the industry. But first, I’d like to briefly look at rendering’s history.

When the CG industry first considered how to render an image on the screen, it was mostly concerned with the hidden surface determination problem. In short, when you have a polygon, which surfaces are visible from the camera’s POV, and which are not. This mentality of thinking about how to “colour in an object”, as opposed to how to simulate light, led to the development of one of the first rendering techniques: flat shading.

In flat shading, the rendering engine would consider the surface normal of each surface with respect to a light source. The more face-on a surface was to the light, the lighter it was shaded, and the more incident the angle, the darker. If the path between the surface and the light source was blocked by another surface, it was shaded black. I’ve attached an example of flat shading in the first image below. This is roughly analogous to an actual physical phenomenon – that the angle of incidence to a material matters.

This was very simple, and also not very realistic. Flat shading was then combined with specular shading, which was essentially the same but heavily biased by the angle of the surface normal, with another parameter to control the falloff of the highlight. Although this created a convincing metallic glint (see monkey two in the image below), it was again just an artistic trick and wasn’t based on an actual physical phenomenon. Nevertheless, it stuck, even to this day.

Shading techniques improved when a Vietnamese gentleman invented the infamous Phong shader. He had the idea of interpolating the vertex normals between vertices to give a gradient of colour through the face. This created much more realistic results (see monkey three), but again, had no real world equivalent.

The next improvement to the shading model came when people noticed the completely black shadows. In real life, global illumination and ambient light bounces mean that almost everything is very effectively indirectly lit. There was no computationally efficient solution to the problem at the time, and so an ambient light constant was added to simply bump up the global lighting (see monkey four). This sort of formed the segue into modern day rendering, and thus ends our history lesson.

Flat, Phong, interpolated, and ambient light shading
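
To make those terms a little more concrete, here is a toy numerical sketch in plain Python (not taken from any particular engine) of how a classic shader combines a Lambert-style diffuse term, a Phong-style specular highlight, and a constant ambient term:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalise(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def classic_shade(normal, to_light, to_camera, ambient=0.1, shininess=32):
    n = normalise(normal)
    l = normalise(to_light)
    v = normalise(to_camera)
    # Diffuse term: the more face-on the surface is to the light, the brighter.
    diffuse = max(dot(n, l), 0.0)
    # Specular term: reflect the light direction about the normal and bias it
    # heavily towards the camera, with an exponent controlling the falloff.
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess
    # Ambient term: a constant bump to fake indirect light bounces.
    return ambient + diffuse + specular

# A surface facing straight up, lit slightly off-axis, viewed from above.
print(classic_shade((0, 0, 1), (0.3, 0.3, 1), (0, 0, 1)))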

The moral of the story is that almost all the shading approaches had no real-life equivalent, and all the subsequent improvements were based upon a method that considered how to colour in a shape from the point of view of the shape itself. This is fundamentally incorrect – in the physical world, how an object looks (at people scales, forget quantum mechanical scales) depends on rays of light emitted from photon-emitting objects (e.g. hot objects) bouncing around and losing energy. Energy is deposited on and reflected off materials in very different ways depending on the microsurface imperfections and the chemical properties of the material.

Luckily, in parallel, as these artistic shaders were being developed, physically-based “ray-tracing” rendering engines were also being developed. These ray-tracers traced rays of photons to and from cameras and light sources in the same way that the real world works. Back then, they were cool technical demos, but were always too inefficient for any practical work. However, we had theoretically proven that if you throw enough computing power at the problem, you can get photo-realistic results. Nowadays, of course, everybody knows about ray-tracing and it’s practically the norm in the market. I’ve shown an example of a chrome Monkey below reflecting the environment – the textbook example of what ray-tracing can achieve that traditional shaders could not (well, not without hacks and light maps and whatnot). You can see another example of photo-realistic rendering with Cycles that I’ve done too.

Glossy ray tracing render

Almost every single popular rendering engine nowadays, such as Blender Cycles, V-Ray, Maxwell, Renderman, and Arnold are ray-tracers. They are getting faster and now combining both GPU and CPU to provide almost real-time rendering. In recent years, Physically Based Rendering, better real world scanners, and improvements on texture painters are three among many advances that make photo-realistic rendering easier and easier.

Basically, photo-realism is becoming really easy. A subtle additional trend to note is that we are actually getting more scientifically based. In the past, these ray-tracers, although somewhat physically based, had so many approximations that real-world units were ignored in favour of arbitrary values.

This is important because ultimate photorealism comes from scanning in real-world data at increasing levels of fidelity. Engines, no matter how physically based they are, will find it hard to use this information if it cannot easily be linked back to physical units and measurable scientific values.

Thankfully, this is actually improving. Simple things like using IES profiles in lighting, or falsecolour luminance images, are starting to be possible with mainstream renderers. The popularisation of the Disney shader is slowly getting engines working on interoperability, and ultimate interoperability, much like ultimate photorealism, depends on scientific values.

At the very least, we know that if we throw more computers at the problem it will eventually converge and leave us with a beautiful real image.

This is great news for architecture – the industry I’m in. Architecture is no stranger to smoke and mirrors when it comes to renders and a trend towards scientific rendering makes it easier to both cheaply prototype and still promise the same results to eager clients.

Until then, let’s play with photoreal game engines and VR while the hype lasts.


Clean meshes automatically in Blender with Python

Dion Moult
, 05/03/2018 | Source: thinkMoult

I wrote a little Python script to clean up imported meshes (OBJs, DXFs, etc) in Blender. It’s quite useful if you often process meshes from other sources, in particular IFCs. Even better is that Blender can be run headlessly and invoke the script automatically, so you can clean meshes server side even before you open them up on your own computer.

Paul Spooner at the BlenderArtists forums was kind enough to rewrite my initial script with improvements. For the record, here it is. Simply copy and paste it into the text editor and hit the Run Script button. It will only impact selected objects.

import bpy
checked = set()  # mesh datablocks already processed (instanced objects share data)
selected_objects = bpy.context.selected_objects
for selected_object in selected_objects:
    if selected_object.type != 'MESH':
        continue
    meshdata = selected_object.data
    if meshdata in checked:
        continue
    else:
        checked.add(meshdata)
    bpy.context.scene.objects.active = selected_object
    bpy.ops.object.editmode_toggle()
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.remove_doubles()  # weld duplicate vertices
    bpy.ops.mesh.tris_convert_to_quads()  # convert tris to quads
    bpy.ops.mesh.normals_make_consistent()  # recalculate normals
    bpy.ops.object.editmode_toggle()

Although it is pretty self-explanatory, what it does is weld vertices, convert tris to quads, and recalculate normals.
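
As mentioned, you can also run this server side. A minimal sketch of a headless invocation might look like the following (the paths are placeholders, and note that the script above acts on whichever objects were selected when the .blend file was saved):

import subprocess

# Run Blender in the background against a .blend file and execute the
# cleaning script. Paths are hypothetical examples.
subprocess.run([
    'blender', '--background', '/srv/models/incoming.blend',
    '--python', '/srv/scripts/clean_meshes.py',
], check=True)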


Australian electrical infrastructure maps

Dion Moult
, 25/02/2018 | Source: thinkMoult

Today we’re going to take a brief look at Australia’s electrical infrastructure. The dataset is available publicly from Geoscience Australia, but for those who don’t dabble in GIS software it can be a little hard to get to. I’ve put it all together in QGIS, and here’s a few snapshots for the curious. Now you can pretend you’re an expert electrical engineer and say judgemental things about Australia!

These maps cover major national power stations, electricity transmission lines, and substations. If you’ve ever wondered where your electricity comes from, or how it gets to your house, this may give you a brief idea of how it all fits together.

Let’s start with Australia as a whole. Translucent circles represent electrical power stations, and their size is weighted by generation capacity. For convenience, any power station with a generation capacity greater than 250MW is labeled. Non-renewable sources are shaded black, and renewables are shaded in red. Transmission power lines are in red, and substations are the small red dots. Transmission lines are weighted by their voltage capacity – thicker means more kV, thinner means less. Dotted transmission lines run underground; the rest are overhead.

You can click any map for a high resolution version.

Australian electrical infrastructure map

Detailed analysis aside, we can see that Australia is still largely based on non-renewables. This is unsurprising. Similarly unsurprising is the south-east coast concentration, and proximity to densely populated areas. Tasmania is mostly devoid of non-renewables, which is great, but what’s that large red circle in the south east? Let’s take a look in more detail.

NSW electrical infrastructure map

Zooming into NSW, we can see Talbingo Dam, which services the Tumut hydroelectric power station. Tumut is special as it is the highest-capacity renewable power station in NSW, and according to the list of major power stations by the Department of Resources & Energy, it has a capacity of 2,465MW. Put into context, this is just under the 2,800MW capacity of the Bayswater coal plant, the second largest non-renewable power plant in NSW.

All this talk about capacity is really important because most renewable power stations have a capacity of less than 100MW. So you would have to build say 20-40 renewable power stations to equal the capacity of a single coal plant. If you excluded Tumut and Murray (the next high capacity hydro after Tumut), and added up every single renewable power plant in NSW (wind, solar, hydro, geothermal, biofuel, and biogas), you would only then equal the capacity of your average NSW coal plant. Snowy Hydro, which runs the show, are damn successful, and the secret is in the name: snow makes for good hydro! All that melting and sudden runoff is great for electricity.

Sydney electrical infrastructure map

Zooming further into the Sydney region shows coal around the perimeter, as well as the local contender which is the Warragamba dam hydro. Despite the promise of the Warragamba dam hydro, it is important to note that it is disconnected from the grid and only provides power when the dam is at a certain level. This is quite a rare occasion for Warragamba, which provides 80% of the potable water of Sydney. On my recent visit to Warragamba, I was actually told that the hydro is being shut down due to high operating costs. Simply put, the dam is better as a reservoir instead of a hydro source.

Sydney electrical infrastructure map zoomed in

Let’s take a closer look at the Sydney region. We see a smattering of renewables and non-renewables. Still, the non-renewables outweigh the renewables – we’ll take a closer look at insolation and local solar capacity in a future blog post, but right now the only renewables of note are the biogases. In short, these stem from landfills (Eastern Creek, Lucas Heights, and Spring Farm) and industrial wastelands (Camellia). Also interesting to note is that just like the Warragamba dam, all of these landfills are already shut down or close to shutting down. We’ll talk about the waste issue and landfill capacities in a future blog post too.

In summary, Australia has a little bit more work to do. Of course, the issue is a lot deeper than these maps, but we can’t cram everything into one blog post, so hopefully it’s enough to whet your appetite.


Show who modified an element last in Revit

Dion Moult
, 18/02/2018 | Source: thinkMoult

In Revit, don’t you ever wish you could find out who was guilty of screwing over your completely pristine BIM model? In the software industry, we run version tracking software so that we can monitor the history of every single line of code, and blame whoever messed it up (literally, the program is called git blame). Although in the architecture industry we don’t quite have this same fidelity of tracking (well, sort of, more on that later), it’s still possible to find out who touched any Revit element last so we can interrogate them.

Finding out who last modified an element or created an element is actually a native Revit feature, but it is not very exposed on the user interface. First, I’ll show you how to check it via the interface, and then I’ll show you how to create a macro to check it from any view. I’ll then also show you how to check the history of less obvious Revit elements, like who last modified the view template.

To do this, we are assuming there is a central Revit file and people are checking out local copies of it. We are also assuming that everybody has different Revit usernames. You can check your Revit username by going to Menu->Options->General->Username.

Revit username option

Then, turn on a worksharing mode. Any of the four available modes have this feature, so pick any that you’d like.

Revit worksharing display mode options

Once the mode is enabled, just hover over any element in your view, and a Revit tooltip will appear showing various information about who created it, who owns it, and who touched it last. I’ve censored it so you can’t see who’s guilty.

Revit last updated tooltip

This is great and really easy. However, to make things even easier, I’ve written a macro that will allow you to click on any element without having to first switch display modes, and then it’ll tell you who touched it last.

Go into the Manage tab and click on Macro Manager. Create a new Module in Python, and dump the following code:

# Note: this assumes the usual macro module imports (the Autodesk.Revit.DB,
# Autodesk.Revit.UI and Autodesk.Revit.UI.Selection namespaces) are in scope.
def Blame(self):
    select = self.Application.ActiveUIDocument.Selection
    el = self.Application.ActiveUIDocument.Document.GetElement(select.PickObject(ObjectType.Element, 'Get element'))
    info = WorksharingUtils.GetWorksharingTooltipInfo(self.Application.ActiveUIDocument.Document, el.Id)
    TaskDialog.Show('Blame', 'Created by: ' + str(info.Creator) + '\nLast changed by: ' + str(info.LastChangedBy))

Press F8 to compile the macro, then run it in the macro manager. After clicking any element, you’ll see a dialog box pop up. I like to assign a keyboard shortcut to the macro manager to make this very quick to do.

If you feel the need to see the history of another less obvious / clickable element (say, a view template), you will need to first get its element ID. This is an integer that all elements in Revit have (note: it is not the GUID, which is a related but different thing). Tools that let you query or browse the BIM database, such as the plugins provided by Ideate, can help you find these element IDs.

Once you have the element ID, you can substitute the element acquisition line in the code above with the below snippet, where XXXXXXXX is your element ID:

el = self.Application.ActiveUIDocument.Document.GetElement(ElementId(XXXXXXXX))

There you have it – it’s all fun and games until you realise that half the screw-ups are your own fault :)


Digital privacy is important, even though you think it doesn’t impact you

Dion Moult
, 12/02/2018 | Source: thinkMoult

The average person (or business entity) publicly shares their personal information on the internet. If you search with Google, send email with Gmail, talk with Facebook Messenger, and browse the Web with Chrome, you are being tracked. These free services, and many more, store and analyse your personal messages, search history, cloud photos, and the websites you visit. This information is readily available to governments, hackers, or really any business or person who is interested and willing to pay (law firms, journalists, advertisers, etc).

This is not news to most people. You have perhaps experienced an advertisement pop up suddenly related to a website you visited that you thought was private. You have probably had Facebook recommend new friends who you just met a week ago. However, these are all rather benign examples that don’t warrant paranoia over your digital security.

As part of my 2018 new year’s resolution I have been taking a closer look at my online privacy. Many people have questioned me on it, so I thought I would address it in a blog post. To begin with, I’d like to refer you to a great TED Talk on Why Privacy Matters. Take 20 minutes to watch it and come back.

Glenn Greenwald - TED - Why Privacy Matters

For those too lazy to click, Glenn Greenwald makes the point that we don’t behave the same way in the physical world and the virtual world. In the physical world, we lock our houses, cover our PIN at the ATM, close the curtains, don’t talk about business secrets in public, and use an empty room when having a private conversation. This is largely because we understand that in the physical world, we can open unlocked doors, glance at PIN keypads, peek through curtains, listen to company gossip, and overhear conversations.

In the virtual world, we are unfortunately uneducated about how easily private information can be snooped on. We assume that sending an email on Gmail is private, or that opening an incognito mode browser hides everything. This is far from the truth: mass surveillance is relatively cheap and easy, and there are many organisations that are well invested in knowing how to snoop. However, most of us only experience this through tailored advertising. As a result, there is little motivation to care about privacy.

In this post, I will not talk about how you are tracked, or how to secure yourself. These are deep topics that deserve more discussion by themselves. However, I do want to talk about why privacy matters.

The right to privacy is a basic human right. Outside the obvious desire to hide company secrets, financial and medical information, we behave differently when we are being watched. You can watch adult videos if you close the door, buy different things if you don’t have a judgmental cashier, and talk about different things on the phone if you aren’t sitting on a train in public.

Again, these are benign and socially accepted norms. However, there are people living in countries where the norm is largely biased against their favour. Global issues like corruption and political oppression exist, even though many of us are lucky to turn a blind eye. Victims of these countries are censored, incarcerated, and killed. See for yourself where your country ranks in the list of freedom indices.

In these societies, a greater percentage of the population start to be impacted by the poor digital security that we practice. We can see this in the following graph, which shows the usage of The Tor Project, a tool that anonymises Internet traffic, correlating with political oppression (read the original study).

Correlation of Tor usage and political repression

Further investigation shows that Tor usage (see how Tor statistics are derived) similarly correlates with politically sensitive events. As of writing this post, I rewound the clock to the three most recent political events that occurred in countries which experience censorship and political oppression.

First, we have the 19th National Congress of the Communist Party of China. You can see the tripling in activity as this event occurred. The red dots show potential censorship.

Chinese Tor usage spikes during the 19th National Congress of the Communist Party of China

Similarly, we can see a turbulent doubling in value during the blocks of social media and TV channels in Pakistan.

Pakistan Tor usage during the social media block

Finally, a spike of usage and statistically relevant censorship / release of censorship events during the anti-government protests in Iran.

Iran Tor usage spikes during Protests in Iran, blocking of various services including Tor

These three events were simply picked as the three most recent political events. Whether they are good or bad is largely irrelevant and I hold no opinion on them whatsoever. However, it is clear that others do have an opinion, and are using services like Tor as a reaction. Of course, it’s not just Tor. For example, a couple of weeks ago, 30,000 Turks were incorrectly accused of treason based on a 1×1 tracking pixel. This resulted in jobs, houses, and innocent lives being lost. In the US, Governors are still signing in support of Net Neutrality.

Despite these issues, there are those who believe that as long as we do not do anything bad, there is nothing to hide, and that privacy tools are used by criminals rather than the common population. This is also untrue. The definition of “bad” changes depending on who is in power, and criminals are motivated individuals who have much better privacy tools than most will ever have. Statistically, increasing the basic awareness of privacy does not increase criminal activity, but it does increase protection of the unfairly oppressed.

Those who are fortunate enough to live a complacent digital life tend to decrease the average awareness of digital privacy. Just as we donate relief aid to countries that experience wars or natural disasters, we should promote awareness about digital freedom on behalf of those who do not have it. Nurturing a more privacy-aware generation, a generation born with a tablet in their hands, is a responsibility to ensure that social justice and the expression of the marginalised population remain possible.

Next up, I’ll talk a bit about what tracking does occur, and what privacy tools are out there.


Breakdown of a photo-realistic image in Blender Cycles

Dion Moult
, 05/02/2018 | Source: thinkMoult

Recently, I wanted to produce a sample photo-realistic 3D scene with Blender’s Cycles engine that I could attempt to recreate in other rendering engines. I took an almost random photo of a street and kerb junction that is prolific throughout Sydney’s suburbs. Here’s that photo below. You can see incredible features that we take for granted, such as the viscous bulging of the asphalt as it hits the kerb, dead eucalyptus leaves, a groove between two concrete blocks, and so on. It’s a slightly over-exposed shot, hence the unnaturally bright grass.

Source image

The resultant 3D equivalent is below, all modeled, textured, and rendered in Blender. I’ve thrown in a glossy Suzanne and sphere, as well as a creative oil slick on the asphalt. You can click on the images to see a high-resolution version.

Rendered image

The modeling itself is ridiculously easy. Excluding the particle systems and dummy meshes, the road and kerb add up to 5 polygons. The split in the middle of the kerb is because I suspect the kerb rose in level a bit, although I ended up ignoring it. This is typically the level of detail you can expect from an architectural scene where only the road level and sidewalk level matter.

You’ll notice there are no lights. The photo was taken under an overcast sky, and so an overcast sky environment map (±4 EV) was used for lighting. The environment map was largely untouched as it was an overcast sky, and so we don’t need to worry about the sun’s impact on the EV range.

Off to one side are some of the meshes used in the particle systems. This spot was below a eucalyptus tree, and so various eucalyptus leaves and other debris needed to be placed. The leaves, grass, and mulch are dumb planes, and only the leaves actually have a texture applied. The leaf texture was not a photo, and instead was from a beautiful eucalyptus leaf painting by a talented artist.

OpenGL render

The basic texture layer adds the first layer of realism. These are all pretty standard, such as using this seamless asphalt texture. I have assigned a diffuse and normal map, and did minor colour correction to the textures. What gives them that bit of realism is the dirt map I have painted for worn edges, which darkens the values to represent the collection of dirt around edges, the gradient of dirt as water falls towards the kerb, and the evaporation of dirt as it washes up against the edge of the kerb before it finally spills over. Unlike its relative, the occlusion map (which fakes a lighting phenomenon), this dirt map actually does represent the deposition of dirt, and therefore a contrast between the sun-bleached material and the darkened dirty material. There is no specular map in this case, though there usually is for roads. The map is shown below.

Road dirt map
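
If you prefer wiring this up with Python rather than clicking nodes together, a rough sketch of the multiply setup looks like this (the image paths are placeholders, and the node names are the Cycles ones from the Blender 2.7x era):

import bpy

mat = bpy.data.materials.new('Asphalt')
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

albedo = nodes.new('ShaderNodeTexImage')
albedo.image = bpy.data.images.load('//textures/asphalt_diffuse.png')
dirt = nodes.new('ShaderNodeTexImage')
dirt.image = bpy.data.images.load('//textures/road_dirt_map.png')

# Multiply the painted dirt map over the base colour so dirty areas darken
mix = nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'MULTIPLY'
mix.inputs['Fac'].default_value = 1.0

bsdf = nodes.new('ShaderNodeBsdfDiffuse')
output = nodes.new('ShaderNodeOutputMaterial')

links.new(albedo.outputs['Color'], mix.inputs['Color1'])
links.new(dirt.outputs['Color'], mix.inputs['Color2'])
links.new(mix.outputs['Color'], bsdf.inputs['Color'])
links.new(bsdf.outputs['BSDF'], output.inputs['Surface'])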

To show the contrast between the effect a dirt map applies and a flat texture, I’ve attached a work in progress screenshot below. You can see the road, which has a dirt map applied, in contrast to the very fake-looking kerb.

Work in progress screenshot

The particle systems are what really give this scene a bit of life. There are 5 particle systems in total: dead eucalyptus leaves, mulch, long weedy Bermuda grass, short Bermuda grass, and dead grass fragments. They are all weight-painted to place them on the scene, with a noise texture to add colour variation to represent patchiness. An example of the weight paint for mulch, and dead grass is seen below.

Mulch weight paint

This gives a particle distribution which can be seen in the AO-pass below.

AO pass

That’s pretty much it! During compositing, an AO pass was multiplied in, colour correction applied, a sharpen filter added, and a slight lens distortion thrown in just for fun. A fully sized render takes about 10 minutes on my Gentoo machine.
