Planet Debian

Planet Debian - https://planet.debian.org/

Gunnar Wolf: Yes! I am going to...

25 June, 2018 - 06:44

Having followed through some paperwork I was still missing...

I can finally say...

Dates

I’m going to DebCamp18! I should arrive at NCTU in the afternoon/evening of Tuesday, 2018-07-24.

I will spend a day prior to that in Tokyo, visiting a friend and probably doing some micro-tourism.

My Agenda

Of course, DebCamp is not a vacation, so we expect people who take part in DebCamp to have at least a rough sketch of activities. There are many, many things I want to tackle, and experience shows there's only time for a fraction of what's planned. But let's try:

keyring-maint training
We want to add one more member to the keyring-maint group. There is a lot to prepare before any announcements, but I expect a good chunk of DebCamp to be spent explaining the details to a new team member.
DebConf organizing
While I'm no longer a core orga-team member, I am still quite attached to helping out during the conference. This year, I took the Content Team lead, and we will surely be ironing out details such as fixing schedule bugs.
Raspberry Pi images
I replied to Michael Stapelberg's call for adoption of the unofficial-but-blessed Raspberry Pi 3 disk images. I will surely be spending some time on that.
Key Signing Party Coordination
I just sent out the Call for keys for keysigning in Hsinchu, Taiwan. At that point, I expect very little work to be needed, but it will surely be on my radar.

Of course... I *do* want to spend some minutes outside NCTU and get to know a bit of Taiwan. This is my first time in East Asia, and I don't know when, if ever, I will have the opportunity to be there again. So, I will try to have at least the time to enjoy a little bit of Taiwan!

Dirk Eddelbuettel: #19: Intel MKL in Debian / Ubuntu follow-up

25 June, 2018 - 04:41

Welcome to the (very brief) nineteenth post in the ruefully recalcitrant R reflections series of posts, or R4 for short.

About two months ago, in the most recent post in the series, #18, we provided a short tutorial about how to add the Intel Math Kernel Library to a Debian or Ubuntu system thanks to the wonderful apt tool -- and the prepackaged binaries by Intel. This made for a simple, reproducible, scriptable, and even reversible (!!) solution---which a few people seem to have appreciated. Good.

In the meantime, more good things happened. Debian maintainer Mo Zhou had posted this 'intent-to-package' bug report leading to this git repo on salsa and this set of packages currently in the 'NEW' package queue.

So stay tuned: "soon" (for various definitions of "soon") we should be able to get the MKL directly onto Debian systems via apt without needing Intel's repo. And in a release or two, Ubuntu should catch up. The fastest multithreaded BLAS and LAPACK for everybody, well-integrated and packaged. That said, it is still a monstrously large package, so I mostly stick with the (truly open source rather than just 'gratis') OpenBLAS -- but hey, choice is good. And yes, technically these packages are 'outside' of Debian in the non-free section, but they will be visible in almost all default configurations.
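For reference, the recipe from post #18 boiled down to something like the following sketch. The key URL, repository line, and package version are reproduced from memory and may well have changed, so treat this as an outline rather than copy-paste material:

cd /tmp
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
echo "deb https://apt.repos.intel.com/mkl all main" | \
    sudo tee /etc/apt/sources.list.d/intel-mkl.list
sudo apt update
sudo apt install intel-mkl-64bit-2018.2-046    # version string will differ over time
# The MKL packages hook into the alternatives system; switching
# (or reverting) the default BLAS/LAPACK is then:
sudo update-alternatives --config libblas.so.3-x86_64-linux-gnu
sudo update-alternatives --config liblapack.so.3-x86_64-linux-gnu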

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: Review: The Trouble with Physics

24 June, 2018 - 11:07

Review: The Trouble with Physics, by Lee Smolin

Publisher: Mariner
Copyright: 2006
Printing: 2007
ISBN: 0-618-91868-X
Format: Trade paperback
Pages: 355

A brief recap of the state of theoretical physics: Quantum mechanics and particle physics have settled on the standard model, which provides an apparently complete inventory of fundamental particles and explains three of the four fundamental forces. This has been very experimentally successful up to and including the recent tentative observation of the Higgs boson, one of the few predictions of the standard model that had yet to be confirmed by experiment. Meanwhile, Einstein's theory of general relativity continues as the accepted explanation of gravity, experimentally verified once again by LIGO and Virgo detection of gravitational waves.

However, there are problems. Perhaps the largest is the independence of these two branches of theoretical physics: quantum mechanics does not include or explain gravity, and general relativity does not sit easily alongside current quantum theory. This causes theoretical understanding to break down in situations where both theories need to be in play simultaneously, such as the very early universe or event horizons of black holes.

There are other problems within both theories as well. Astronomy shows that objects in the universe behave as if there is considerably more mass in galaxies than we've been able to observe (the dark matter problem), but we don't have a satisfying theory of what would make up that mass. Worse, the universe is expanding more rapidly than it should, requiring introduction of a "dark energy" concept with no good theoretical basis. And, on the particle physics side, the standard model requires a large number (around 20, depending on how you measure them) of apparently arbitrary free constants: numbers whose values don't appear to be predicted by any basic laws and therefore could theoretically be set to any value. Worse, if those values are set even very slightly differently than we observe in our universe, the nature of the universe would change beyond recognition. This is an extremely unsatisfying property for an apparently fundamental theory of nature.

Enter string theory, which is the dominant candidate for a deeper, unifying theory behind the standard model and general relativity that tries to account for at least some of these problems. And enter this book, which is a critique of string theory as both a scientific theory and a sociological force within the theoretical physics community.

I should admit up-front that Smolin's goal in writing this book is not the same as my goal in reading it. His primary concern is the hold that string theory has on theoretical physics and the possibility that it is stifling other productive avenues, instead spinning off more and more untestable theories that can be tweaked to explain any experimental result. It may even be leading people to argue against the principles of experimental science itself (more on that in a moment). But to mount his critique for the lay reader, he has to explain the foundations of both accepted theoretical physics and string theory (and a few of the competing alternative theories). That's what I was here for.

About a third of this book is a solid explanation of the history and current problems of theoretical physics for the lay person who is already familiar with basic quantum mechanics and general relativity. Smolin is a faculty member at the Perimeter Institute for Theoretical Physics and has done significant work in string theory, loop quantum gravity (one of the competing attempts to unify quantum mechanics and general relativity), and the (looking dubious) theory of doubly special relativity, so this is an engaged and opinionated overview from an active practitioner. He lays out the gaps in existing theories quite clearly, conveys some of the excitement and disappointment of recent (well, as of 2005) discoveries and unsolved problems, provides a solid if succinct summary of string theory, and manages all of that without relying on too much complex math. This is exactly the sort of thing I was looking for after Brian Greene's The Elegant Universe.

Another third of this book is a detailed critique of string theory, and specifically the assumption that string theory is correct despite its lack of testable predictions and its introduction of new problems. I noted in my review of Greene's book that I was baffled by his embrace of a theory that appears to add even more free variables than the standard model, an objection that he skipped over entirely. Smolin tackles this head-on, along with other troublesome aspects of a theory that is actually an almost infinitely flexible family of theories and whose theorized unification (M-theory) is still just an outline of a hoped-for idea.

The core of Smolin's technical objection to string theory is that it is background-dependent. Like quantum mechanics, it assumes a static space-time backdrop against which particle or string interactions happen. However, general relativity is background-independent; indeed, that's at the core of its theoretical beauty. It states that the shape of space-time itself changes, and is a participant in the physical effects we observe (such as gravity). Smolin argues passionately that background independence is a core requirement for any theory that aims to unify general relativity and quantum mechanics. As long as a theory remains background-dependent, it is, in his view, missing Einstein's key insight.

The core of his sociological objection is that he believes string theory has lost its grounding in experimental verification and has acquired a far greater aura of certainty than it deserves given its current state, and has done so partly because of the mundane but pernicious effects of academic and research politics. On this topic, I don't know nearly enough to referee the debate, but his firm dismissal of attempts to justify string theory's weaknesses via the anthropic principle rings true to me. (The anthropic principle, briefly, is the idea that the large number of finely-tuned free constants in theories of physics need not indicate a shortcoming in the theory, but may be that way simply because, if they weren't, we wouldn't be here to observe them.) Smolin's argument is that no other great breakthrough of physics has had to rely on that type of hand-waving, that elegance of a theory isn't sufficient justification to reach for this sort of defense, and that to embrace the anthropic principle and its inherent non-refutability is to turn one's back on the practice of science. I suspect this ruffled some feathers, but Smolin put his finger squarely on the discomfort I feel whenever the anthropic principle comes up in scientific discussions.

The rest of the book lays out some alternatives to string theory and some interesting lines of investigation that, as Smolin puts it, may not pan out but at least are doing real science with falsifiable predictions. This is the place where the book shows its age, and where I frequently needed to do some fast Wikipedia searching. Most of the experiments Smolin points out have proven to be dead ends: we haven't found Lorentz violations, the Pioneer anomaly had an interesting but mundane explanation, and the predictions of modified Newtonian dynamics do not appear to be panning out. But I doubt this would trouble Smolin; as he says in the book, the key to physics for him is to make bold predictions that will often be proven wrong, but that can be experimentally tested one way or another. Most of them will lead to nothing, but one can reach a definitive result, unlike theories with so many tunable parameters that all of their observable effects can be hidden.

Despite not having quite the focus I was looking for, I thoroughly enjoyed this book and only wish it were more recent. The physics was pitched at almost exactly the level I wanted. The sociology of theoretical physics was unexpected but fascinating in a different way, although I'm taking it with a grain of salt until I read some opposing views. It's an odd mix of topics, so I'm not sure if it's what any other reader would be looking for, but hopefully I've given enough of an outline above for you to know if you'd be interested.

I'm still looking for the modern sequel to One Two Three... Infinity, and I suspect I may be for my entire life. It's hard to find good popularizations of theoretical physics that aren't just more examples of watching people bounce balls on trains or stand on trampolines with bowling balls. This isn't exactly that, but it's a piece of it, and I'm glad I read it. And I wish Smolin the best of luck in his quest for falsifiable theories and doable experiments.

Rating: 8 out of 10

Hideki Yamane: OSSummit Japan 2018

24 June, 2018 - 09:33

I participated in OSSummit Japan 2018 as a volunteer staff member for three days.

Some Debian developers (Jose from Microsoft and Michael from credativ) gave talks during the event.

Got some stickers (why Fedora? Because I got help with introducing an improvement from Fedora people, as previously noted :)



Sven Hoexter: nginx, lua, uuid and a nchan bug

23 June, 2018 - 23:08

At work we're running nginx in several instances, sometimes on Debian/stretch (Woooh) and sometimes on Debian/jessie (Boooo). To improve our request tracking abilities we set out to add a header with a UUID version 4 if one does not exist yet. We expected this to be something we could implement in a few hours at most ...

/proc/sys/kernel/random/uuid vs lua uuid module

If you start to look around for how to implement it, you might find out that there is a Lua module to generate a UUID. Since this module is not packaged in Debian we started to think about packaging it, but on second thought we wondered whether simply reading from the Linux /proc interface isn't faster after all. So we built a very unscientific test case that we deemed good enough:

$ cat uuid_by_kernel.lua
#!/usr/bin/env lua5.1
local i = 0
repeat
  local f = assert(io.open("/proc/sys/kernel/random/uuid", "rb"))
  local content = f:read("*all")
  f:close()
  i = i + 1
until i == 1000


$ cat uuid_by_lua.lua
#!/usr/bin/env lua5.1
package.path = package.path .. ";/home/sven/uuid.lua"
local i = 0
repeat
  local uuid = require("uuid")
  local content = uuid()
  i = i + 1
until i == 1000

The result is in favour of using the Linux /proc interface:

$ time ./uuid_by_kernel.lua
real    0m0.013s
user    0m0.012s
sys 0m0.000s

$ time ./uuid_by_lua.lua
real    0m0.021s
user    0m0.016s
sys 0m0.004s

nginx in Debian/stretch vs nginx in Debian/jessie

Now that we had settled on the lua code

-- Use the incoming correlation ID if present, otherwise generate a UUID.
if (ngx.var.http_correlation_id == nil or ngx.var.http_correlation_id == "") then
  local f = assert(io.open("/proc/sys/kernel/random/uuid", "rb"))
  local content = f:read("*all")
  f:close()
  -- sub(1, -2) strips the trailing newline the kernel appends.
  return content:sub(1, -2)
else
  return ngx.var.http_correlation_id
end

and the nginx configuration

set_by_lua_file $ngx.var.http_correlation_id /etc/nginx/lua-scripts/lua_uuid.lua;

we started to roll this out to our mixed setup of Debian/stretch and Debian/jessie hosts. While we had tested it on Debian/stretch, where it all worked fine, we never gave it a try on Debian/jessie. Within seconds of the rollout, all our nginx instances on Debian/jessie started to segfault.

Half an hour later it was clear that the nginx release shipped in Debian/jessie does not yet allow you to write directly into the internal variable $ngx.var.http_correlation_id. To work around this issue we configured nginx to use the add_header configuration option to create the header:

set_by_lua_file $header_correlation_id /etc/nginx/lua-scripts/lua_uuid.lua;
add_header correlation_id $header_correlation_id;

This configuration works on Debian/stretch and Debian/jessie.

Another possibility we considered was using the backported version of nginx. But this one depends on a newer openssl release. I didn't want to walk down the road of manually tracking potential openssl bugs against a release not supported by the official security team. So we rejected this option. Next item on the todo list is for sure the migration to Debian/stretch, which is overdue now anyway.

and it just stopped

A few hours later we found that the nginx running on Debian/stretch was still running, but no longer responding. Attaching strace revealed that all processes (worker and master) were waiting on a futex() call. Logs showed an assert pointing in the direction of the nchan module. I think the bug we're seeing is #446; I've added the few bits of additional information I could gather. We just moved on and disabled the module on our systems. It has now been running fine in all cases for a few weeks.

Kudos to Martin for walking down this muddy road together on a Friday.

Steinar H. Gunderson: Nageru deployments

23 June, 2018 - 16:45

As we're preparing our Nageru video chains for another Solskogen, I thought it worthwhile to make some short posts about deployments in the wild (neither of which I had much involvement with myself):

  • The Norwegian municipality of Frøya is live streaming all of their council meetings using Nageru (Norwegian only). This is a fairly complex setup with a custom frontend controlling PTZ cameras, so that someone non-technical can just choose from a few select scenes and everything else just clicks into place.
  • Breizhcamp, a French technology conference, used Nageru in 2018, transitioning from OBS. If you speak French, you can watch their keynote about it (itself produced with Nageru) and all their other videos online. Breizhcamp ran their own patched version of Nageru (available on GitHub); I've taken most of their patches into the main repository, but not all of them yet.

Also, someone thought it was a good idea to take an old version of Nageru, strip all the version history and put it on Github with (apparently) no further changes. Like, what. :-)

Benjamin Mako Hill: I’m a maker, baby

23 June, 2018 - 06:34


What does the “maker movement” think of the song “Maker” by Fink?

Is it an accidental anthem or just unfortunate evidence of the semantic ambiguity around an overloaded term?

Lars Wirzenius: Ick ALPHA-6 released: CI/CD engine

21 June, 2018 - 23:34

It gives me no small amount of satisfaction to announce the ALPHA-6 version of ick, my fledgling continuous integration and deployment engine. Ick has been now deployed and used by other people than myself.

Ick can, right now:

  • Build system trees for containers.
  • Use system trees to run builds in containers.
  • Build Debian packages.
  • Publish Debian packages via its own APT repository.
  • Deploy to a production server.

There are still many missing features. Ick is by no means ready to replace your existing CI/CD system, but if you'd like to have a look at ick, and help us make it the CI/CD system of your dreams, now is a good time to give it a whirl.

(Big missing features: web UI, building for multiple CPU architectures, dependencies between projects, good documentation, a development community. I intend to make all of these happen in due time. Help would be welcome.)

John Goerzen: Making a difference

21 June, 2018 - 01:24

Every day, ask yourself this question: What one thing can I do today that will make this democracy stronger and honor and support its institutions? It doesn’t have to be a big thing. And it probably won’t shake the Earth. The aggregation of them will shake the Earth.

– Benjamin Wittes

I have written some over the past year or two about the dangers facing the country. I have become increasingly alarmed about the state of it. And that Benjamin Wittes quote, along with the terrible tragedy, spurred me to action. Among other things, I did two things I never have done before:

I registered to protest on June 30.

I volunteered to do phone banking with SwingLeft.

And I changed my voter registration from independent to Republican.

No, I have not gone insane. The reason for the latter is that here in Kansas, the Democrats rarely field candidates for most offices. The real action happens in the Republican primary. So if I can vote in that primary, I can have a voice in keeping the craziest of the crazy out of office. It’s not much, but it’s something.

Today we witnessed, hopefully, the first victory in our battle against the abusive practices happening to children at the southern border. Donald Trump caved, and in so doing, implicitly admitted the lies he and his administration have been telling about the situation. This only happened because enough people thought like Wittes: “I am small, but I can do SOMETHING.” When I called the three Washington offices of my senators and representatives — far-right Republicans all — it was apparent that I was by no means the first to give them an earful about this, and that they were changing their tone because of what they heard. Mind you, they hadn’t taken any ACTION yet, but the calls mattered. The reporting mattered. The attention mattered.

I am going to keep doing what little bit I can. I hope everyone else will too. Let us shake the Earth.

Julien Danjou: Stop merging your pull requests manually

20 June, 2018 - 22:53

If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this? I doubt it.

Nevertheless, every day, there are thousands of developers using GitHub who do the same thing over and over again: they click on this button:

This does not make any sense.

Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.

It does not make any sense because every development team in the world has a known list of prerequisites before they merge a pull request. Those requirements are almost always the same, something along these lines:

  • Is the test suite passing?
  • Is the documentation up to date?
  • Does this follow our code style guideline?
  • Have N developers reviewed this?

As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while not enough developers had reviewed the patch." Ring a bell?

In my team, we're like every team out there. We know what our criteria for merging code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.

When those conditions are all set, I want the code to be merged.

Without clicking a single button.

That's exactly how Mergify started.

Mergify is a service that pushes that merge button for you. You define rules in the .mergify.yml file of your repository, and when the rules are satisfied, Mergify merges the pull request.

No need to press any button.

Take a random pull request, like this one:

This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should already be merged: but it's there, hanging, chilling, waiting for someone to push that merge button. Someday.

With Mergify enabled, you'd just have to put this .mergify.yml at the root of the repository:

rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1

With such a configuration, Mergify enforces the desired restrictions, i.e., Travis passes and at least one project member has reviewed the code. As soon as those conditions are met, the pull request is automatically merged.

We built Mergify as a free service for open-source projects. The engine powering the service is also open-source.

Now go check it out and stop letting those pull requests hang out one second more. Merge them!

If you have any questions, feel free to ask us or write a comment below! And stay tuned — Mergify offers a few other features that I can't wait to talk about!

Craig Small: Odd dependency on Google Chrome

20 June, 2018 - 18:21

For weeks I have had problems with Google Chrome. It would work a few times and then, for reasons I didn't understand, stop working. On the command line you would get several screens of text, but the Chrome window would never appear.

So I tried the Beta, and it worked… once.

Deleted all the cache and configuration and it worked… once.

Every time, on the second and subsequent starts of Chrome, the process would sit in an infinite loop listening on a Unix socket (fd 7), but no window appeared.

By sheer luck, in the screenfuls of spam I noticed this:

Gkr-Message: 21:07:10.883: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files

Hmm. I noticed that every time I started a fresh new Chrome, I logged into my Google account. So, once again clearing everything, I started Chrome, didn't log in, and closed and reopened it. I had Chrome running the second time! Alas, not with all the stuff synchronised.

An issue for Mailspring put me onto the right path: installing gnome-keyring (or the dependencies p11-kit and gnome-keyring-pkcs11) fixed Chrome.
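On a Debian-style system, that amounts to (package names as given above; the dependencies should be pulled in automatically):

$ sudo apt install gnome-keyring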

So if Chrome starts but you get no window, especially if you use Cinnamon, try that trick.


Jonathan Carter: Plans for DebCamp18

20 June, 2018 - 15:32

Dates

I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda
  • DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).
  • DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.
  • Debian Live: I have a bunch of loose ideas that I'd like to formalize before then. At the very least I'd like to file a bunch of paper-cut bugs for the live images that I just haven't been getting to. The live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There's a lot to figure out, and this is great to do in person (it might lead to a DebConf BoF as well).
  • Debian Live: Current weekly live images have Calamares installed. It's just a test, and there's no indication yet whether it will be available on the beta or final release images; we'll have to assess all the consequences and weigh up what will work best. I want to put together an initial report with the live team members who are around.
  • AIMS Desktop: Get the core AIMS meta-packages into Debian… no blockers on this, I just haven't had enough quiet time to do it. (And thanks to AIMS for covering my travel to Hsinchu!)
  • Get some help on ITPs that have been a little bit more tricky than expected:
    • gamemode – Adjust power saving and cpu governor settings when launching games
    • notepadqq – a Linux clone of Notepad++, a popular text editor on Windows
    • Possibly finish up zram-tools which I just don’t get the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.
  • Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
  • Get to know more Debian people, relax and socialize!

Athos Ribeiro: Triggering Debian Builds on OBS

20 June, 2018 - 09:26

This is my fifth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian builds on OBS

OBS supports building Debian packages. To do so, one must configure a project properly, so that OBS knows it is building a .deb package, and the build environment must have the packages needed to handle and build Debian packages installed.

openSUSE’s OBS instance has repositories for Debian 8, Debian 9, and Debian testing.

We will use base Debian projects in our OBS instance as Download on Demand projects and use subprojects to achieve our final goal (building packages against Clang). By using the same configurations as the openSUSE public projects, we could perform builds for Debian 8 and Debian 9 in our local OBS deploys. However, builds for Debian Testing and Unstable were failing.

On further investigation, we realized that the OBS version packaged in Debian cannot decompress control.tar.xz files in .deb packages; .xz has been the default compression format for the control tarball since dpkg 1.19 (it used to be control.tar.gz before that). This issue was reported on the OBS repositories and was fixed in a Pull Request that is not yet included in the current Debian OBS version. For now, we apply this patch to our OBS instance in our salt states.

After applying the patch, builds on Debian 8 and 9 still finish successfully, but builds against Debian Testing and Unstable get stuck in a blocked state: dependencies are downloaded, the OBS scheduler stalls for a while, the downloaded packages get cleaned up, and then the dependencies are downloaded again. The OBS backend loops through this procedure and never assigns a build to a worker. No logs with hints about a possible cause are issued, giving us no clue about the problem.

Although I am inclined to believe we have a problem with our dependency list, I am still debugging this issue this week and will bring more news in my next post.

Refactoring project configuration files

Reshabh opened a Pull Request in our salt repository with the OBS configuration files for Ubuntu, also based on openSUSE's public OBS configurations. Based on Sylvestre's comments, I have been refactoring the Debian configuration files following the OBS documentation. One of the proposed improvements is to use debootstrap to mount the builder chroot. This will allow us to reduce the number of dependencies listed in the project configuration files. The issue which led to debootstrap support in OBS is available at https://github.com/openSUSE/obs-build/issues/111 and may lead to more interesting resources on the matter.

Next steps (A TODO list to keep on the radar)
  • Fix OBS builds on Debian Testing and Unstable
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)

Shashank Kumar: Google Summer of Code 2018 with Debian - Week 5

20 June, 2018 - 01:30

During week 5, there were 3 merge requests undergoing the review process simultaneously. I learned a lot about how code should be written to assist the reader, since code is read many more times than it is written.

Services and Utility

After the user has entered their information on the signin or signup screen, the job of querying the database was given to a module named updatedb. The job of updatedb was to clean user input, hash the password, query the database, and respond with an appropriate result after the database query is executed. In a discussion with Sanyam, he pointed out that updatedb doesn't conform to its name given the functions it incorporates. He explained the virtue of service and utility modules/functions, and that this is the best place to restructure the code along those lines.

Utility functions can be described roughly as functions which perform some operations on data without caring much about the relationship of that data to the application. So generating a uuid, cleaning an email address, cleaning a full name, and hashing a password become our utility functions, as can be seen in utils.py for signup and similarly for signin.
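As a rough illustration (not the project's actual code; the function bodies are assumptions, and a real application would want a salted password-hashing scheme rather than a bare digest), such a utility module could look like:

import hashlib
import uuid


def generate_uuid():
    """Return a random version 4 UUID string, e.g. to identify a new user."""
    return str(uuid.uuid4())


def clean_email(email):
    """Normalize an email address before it is used in a query."""
    return email.strip().lower()


def clean_full_name(full_name):
    """Collapse stray whitespace in a user's full name."""
    return ' '.join(full_name.split())


def hash_password(password):
    """Hash a password so it is never stored in plain text."""
    return hashlib.sha256(password.encode('utf-8')).hexdigest()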

Service functions can be described roughly as functions which, while performing operations on data, take its relationship with the application into account. Hence, these functions are not generic but application-specific. sign_up_user is one such service function: it receives user information, calls utility functions to transform that information, and queries the database for the signup operation, i.e., adding the new user's details to the database, or raising SignUpError if the details are already present. This can be seen in the services module for signup and signin as well.
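A hedged sketch of such a service function, reusing the utility helpers sketched above (the users table layout and the SignUpError definition are illustrative assumptions):

class SignUpError(Exception):
    """Raised when the submitted details already belong to a user."""


def sign_up_user(connection, email, full_name, password):
    """Insert a new user, or raise SignUpError if the email is taken."""
    email = clean_email(email)
    cursor = connection.cursor()
    cursor.execute("SELECT 1 FROM users WHERE email = ?", (email,))
    if cursor.fetchone() is not None:
        raise SignUpError("a user with this email already exists")
    cursor.execute(
        "INSERT INTO users (id, email, name, password) VALUES (?, ?, ?, ?)",
        (generate_uuid(), email, clean_full_name(full_name),
         hash_password(password)),
    )
    connection.commit()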

Persisting database connection

This is how the connection to the database used to work before the review: the settings module would create the connection to the database, create the table schema if not present, and close the connection. A few constants were saved in the module to be used by signup and signin to connect to the database. The problem is that a database connection then has to be established every time a query is executed by the services of signup or signin. Since the sqlite3 database is saved in a file alongside the application, I thought it would not be a problem to make a connection whenever needed, but it puts overhead on the OS, which can slow down the application when scaled. To resolve this, settings now returns the connection object, which can be reused in any other module.
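A minimal sketch of that idea (module, path, and table names are illustrative):

import sqlite3

_connection = None


def get_connection(db_path='app.db'):
    """Open the sqlite3 connection on first use and hand out the same
    object on every later call, instead of reconnecting per query."""
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(db_path)
        # Create the schema once if it does not exist yet.
        _connection.execute(
            "CREATE TABLE IF NOT EXISTS users "
            "(id TEXT PRIMARY KEY, email TEXT UNIQUE, "
            "name TEXT, password TEXT)"
        )
    return _connection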

Integrating SignUp with Dashboard

While the SignUp feature was being reviewed, the Dashboard was merged, and I had to refactor the SignUp merge request accordingly. The natural flow should be SignUp being the default screen on the UI, with the Dashboard displayed after a successful signup. To achieve such a flow, I used a screen manager, which handles different screens and the transitions between them with predefined animations. This is defined in the main module and the entire flow can be seen in action below.

Designing the Tutorials and Tools menu

Once the user is on the Dashboard, they can pick from the different modules and go through the tutorials and tools available in each. The idea is to display a difficulty tip as well, so it becomes easier for the user to begin. Below is what I've designed to incorporate this.

Implementing the Tutorials and Tools menu

Now comes the fun part: thinking about the architecture of the modules just designed, so that they can take shape as code in the application. The idea is to define them in a json file to be read by the respective module afterwards. This way it'll be easier to add new tutorials and tools, and hence we have this resultant json. The development of this feature can be followed on this merge request.

Now remains the quest to design and implement a structure for tutorials, generalized so that it can be populated from a json file. This will provide flexibility to the developer of tutorials, and a UI module could also be implemented to modify this json and add new tutorials without knowing how to code. Sounds amazing, right? We'll see how it works out soon. If you have any suggestions, make sure to comment down below, comment on the merge request, or reach out to me.

The Conclusion

Since SignUp has also been merged, I'll have to refactor SignIn now to integrate all of it into one happy application and complete the natural flow of things. Also, the design and development of tools/tutorials is underway, and by the time the next blog is out you might be able to test the application with at least one tool or tutorial from one of the modules on the dashboard.

Benjamin Mako Hill: How markets coopted free software’s most powerful weapon (LibrePlanet 2018 Keynote)

20 June, 2018 - 01:03

Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production.

A video of the talk is online on YouTube and available as a WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here’s a summary of the talk:

App stores and the so-called “sharing economy” are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn’t exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: A decade ago,  the kind of mass collaboration that made Wikipedia, GNU/Linux, or Couchsurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true, new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what’s available in the commons. For example, the number of people joining Couchsurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks who are committed to working in commons. I also talk a little bit about what the free culture and free software communities should do now that mass collaboration, these communities' most powerful weapon, is being used against them.

I’m very much interested in feedback provided any way you want to reach me including in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.

Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). Some of the initial ideas behind this talk were developed while working on this paper (official link) which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.

Sean Whitton: I'm going to DebCamp18, Hsinchu, Taiwan

19 June, 2018 - 22:43

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items.

DebCamp work

Throughout DebCamp and DebConf
  • Debian Policy: sticky bugs; process; participation; translations

  • Helping people use dgit and git-debrebase

    • Writing up or following up on feature requests and bugs

    • Design work with Ian and others

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2018

19 June, 2018 - 15:27

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.


Erich Schubert: Predatory publishers: SciencePG

19 June, 2018 - 15:12

I got spammed again by SciencePG (“Science Publishing Group”).

One of many (usually Chinese or Indian) fake publishers that will publish anything as long as you pay their fees. Unfortunately, once you have published a few papers, you inevitably land on their spam list: they scrape the websites of good journals for email addresses, and you do want your contact email address on your papers.

However, this one is particularly hilarious: They have a spelling error right at the top of their home page!

Fail.

Speaking of fake publishers. Here is another fun example:

Kim Kardashian, Satoshi Nakamoto, Tomas Pluskal
Wanion: Refinement of RPCs.
Drug Des Int Prop Int J 1(3)- 2018. DDIPIJ.MS.ID.000112.

Yes, that is a paper in the "Drug Designing & Intellectual Properties" International (Fake) Journal. And the content is a typical SCIgen-generated paper that throws around random computer buzzwords and makes absolutely no sense. Not even the abstract. The references are also just made up. And so are the first two authors, VIP Kim Kardashian and missing Bitcoin inventor Satoshi Nakamoto…

In the PDF version, the first headline is “Introductiom”, with “m”…

So Lupine Publishers is another predatory publisher that does not peer review, nor check whether an article is on topic for the journal.

Via Retraction Watch

Conclusion: just because it was published somewhere does not mean this is real, or correct, or peer reviewed…

Reproducible builds folks: Reproducible Builds: Weekly report #164

19 June, 2018 - 14:40

Here’s what happened in the Reproducible Builds effort between Sunday June 10 and Saturday June 16 2018:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 96 was uploaded to Debian unstable by Chris Lamb. It includes contributions already covered by posts in previous weeks as well as new ones from:

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Packages reviewed and fixed, and bugs filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Arthur Del Esposte: GSoC Status Update - First Month

19 June, 2018 - 10:00

In the past month I have been working on my GSoC project in Debian's Distro Tracker. This project aims at designing and implementing new features in Distro Tracker to better support Debian teams in tracking the health of their packages and prioritizing their work efforts. In this post, I will describe the current status of my contributions, highlight the main challenges, and point out the next steps.

Work Management and Communication

I communicate with Lucas Kanashiro (my mentor) constantly via IRC and in person at least once a week, as we live in the same city. We have a weekly meeting with Raphael Hertzog in the #debian-qa IRC channel to report progress, collect feedback, resolve technical doubts, and plan the next steps.

I created a new repository in Salsa to save the logs of our IRC meetings and to track my tasks through the repository's issue tracker. Besides that, once a month I'll post a new status update on my blog, such as this one, with more details regarding my contributions.

Advances

When GSoC officially started, Distro Tracker already had some team-related features. Briefly, a team is an entity composed of one or more users who are interested in the same set of packages. Teams are created manually by users, and anyone may join public teams. The team page aggregates some basic information about the team and the list of packages of interest.

Distro Tracker offers a page that enables users to browse public teams, showing a paginated, sorted list of names. It used to be hard to find a team in this list, since Distro Tracker has more than 110 teams distributed over 6 pages. So I added a new search field with auto-complete at the top of the teams page to let users find a team's page faster, as shown in the following figure:

Also, I have been working on improving the current teams infrastructure to enable Debian’s teams to better track the health of their packages. Initially, we decided to use the current data available in Distro Tracker to create the first version of a new team’s page based on PET.

Presenting team’s packages data in a table on the team’s page would be a relatively trivial task. However, Distro Tracker architecture aims to provide a generic core which can be extended through specific distro applications, such as Kali Linux. The core source code provides generic infrastructure to import data related to deb packages and also to present them in HTML pages. Therefore, we had to consider this Distro Tracker requirement to properly provide a extensible infrastructure to show packages data through tables in so that it would be easy to add new table fields and to change the default behavior of existing columns provided by the core source code.

So, based on the previously existing panels feature and on Hertzog’s suggestions, I designed and developed a framework to create customizable package tables for teams. This framework is composed of two main classes:

  • BaseTableField - A base class representing fields to be displayed on package tables. Among other things, it must define the column name and a template to render the cell content for a package.
  • BasePackageTable - A base class representing package tables which are displayed on a team page. It may have several BaseTableFields to display package information. Different tables may show different lists of packages based on their scope. (A hypothetical subclass sketch follows below.)
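To make the division of labor concrete, here is a hypothetical sketch of how an app might use these two base classes. The attribute names column_name and template_name are guesses from the description above, and the import path and constructor are assumptions, so the real API may differ:

from distro_tracker.core.models import PackageName
# Assumed import path for the classes described above:
from distro_tracker.core.package_tables import (
    BasePackageTable,
    BaseTableField,
    GeneralInformationTableField,
)


class UploadersTableField(BaseTableField):
    """A custom column showing a package's uploaders."""
    column_name = 'Uploaders'  # guessed attribute name
    template_name = 'myapp/package-uploaders.html'  # guessed attribute name


class TeamPackageTable(BasePackageTable):
    """A table scoped to the packages of a single team."""
    table_fields = [GeneralInformationTableField, UploadersTableField]

    def __init__(self, team):
        self.team = team

    @property
    def packages(self):
        # Scope: only this team's packages, sorted by name.
        return self.team.packages.all().order_by('name')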

We have been discussing my implementation in an open Merge Request, and we are very close to the version that should be incorporated. The following figures compare PET's earlier table with our current implementation.

(Figures: the PET packages table and the new Distro Tracker packages table.)

Currently, the team’s page only have one table, which displays all packages related to that team. We are already presenting a very similar set of data to PET’s table. More specifically, the following columns are shown:

  • Package - displays the package name in the cell. It is implemented by the core's GeneralInformationTableField class
  • VCS - by default, displays the type of the package's repository (i.e. Git, SVN) or Unknown. It is implemented by the core's VcsTableField class. However, the Debian app extends this behavior by adding the changelog version of the latest repository tag and displaying issues identified by Debian's VCSWatch.
  • Archive - displays the package version on distro archive. It is implemented by the core’s ArchiveTableField class.
  • Bugs - displays the total number of bugs of a package. It is implemented by the core's BugsTableField class. Ideally, each third-party app should extend this table field to add links to its own bug tracking system.
  • Upstream - displays the latest upstream version available. This is a specific table field implemented by the Debian app, since this data is imported through Debian-specific tasks. It is therefore not available for other distros.

As the table’s cells are small to present detailed information, we have added Popper.js, a javascript library to display popovers. In this sense, some columns show a popover with more details regarding its content which is displayed on mouse hover. The following figure shows the popover to the Package column:

In addition to designing the table framework, the main challenge was to avoid the N+1 problem, which introduces performance issues: for a set of N packages displayed in a table, each field element must perform one or more lookups of additional data for a given package. To solve this problem, each subclass of BaseTableField must define a set of Django's Prefetch objects to enable BasePackageTable objects to load all required data in batch, in advance, through prefetch_related, as listed below.

class BasePackageTable(metaclass=PluginRegistry):
    @property
    def packages_with_prefetch_related(self):
        """
        Returns the list of packages with prefetched relationships defined by
        table fields
        """
        package_query_set = self.packages
        for field in self.table_fields:
            for l in field.prefetch_related_lookups:
                package_query_set = package_query_set.prefetch_related(l)

        additional_data, implemented = vendor.call(
            'additional_prefetch_related_lookups'
        )
        if implemented and additional_data:
            for l in additional_data:
                package_query_set = package_query_set.prefetch_related(l)
        return package_query_set

    @property
    def packages(self):
        """
        Returns the list of packages shown in the table. One may define this
        based on the scope
        """
        return PackageName.objects.all().order_by('name')


class ArchiveTableField(BaseTableField):
    prefetch_related_lookups = [
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='general'),
            to_attr='general_archive_data'
        ),
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='versions'),
            to_attr='versions'
        )
    ]

    @cached_property
    def context(self):
        try:
            info = self.package.general_archive_data[0]
        except IndexError:
            # There is no general info for the package
            return

        general = info.value

        try:
            info = self.package.versions[0].value
            general['default_pool_url'] = info['default_pool_url']
        except IndexError:
            # There is no versions info for the package
            general['default_pool_url'] = '#'

        return general

Finally, it is worth noting that we also improved the team's management page by moving all team management features to a single page and improving its visual structure:

Next Steps

Now, we are moving towards adding other tables with different scopes, such as the tables presented by PET:

To this end, we will introduce the Tag model class to categorize the packages based on their characteristics. Thus, we will create an additional task responsible for tagging packages based on their available data. The relationship between packages and tags should be ManyToMany. In the end, we want to perform a simple query to define the scope of a new table, such as the following example to query all packages with Release Critical (RC) bugs:

class RCPackageTable(BasePackageTable):
    @property
    def packages(self):
        # Use get() to fetch the single matching tag; filter() would
        # return a queryset, which has no `packages` attribute.
        tag = Tag.objects.get(name='rc-bugs')
        return tag.packages.all()
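The Tag side of that sketch might look like the following Django model (again hypothetical; the field names and related_name are assumptions):

from django.db import models


class Tag(models.Model):
    """Categorizes packages by characteristics such as 'rc-bugs'."""
    name = models.CharField(max_length=100, unique=True)
    # ManyToMany: a package may carry several tags and vice versa.
    packages = models.ManyToManyField('PackageName', related_name='tags')

    def __str__(self):
        return self.name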

We will probably need to work on Debian's VCSWatch to enable it to receive updates through Salsa's webhooks, especially for real-time monitoring of repositories.

Let’s get moving on! \m/

Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.