Predictive Policing - The Algorithm Knows When And Where You Will Commit Your Crime - Critical View Of New Possibilities In Police Work Using The Example Of Precobs.

Philipp Reichenbach

Affiliation: Albert-Ludwigs-Universität Freiburg

DOI: 10.17160/josha.4.3.308

Languages: German

Since the 19th century, police work has relied on the collection and analysis of criminal data, traditionally visualized as crime maps. Today such maps are created by software using self-learning algorithms designed to enable not only data analysis but also predictions about the future. This approach, which builds on the latest advances in data science, is called predictive policing. It is a multidisciplinary method that combines the collection of large amounts of data, its systematically filtered evaluation, and criminological theories. On the one hand, predictive policing software opens up opportunities to make police work more efficient, reduce crime rates and increase security at crime hotspots. On the other hand, critical issues such as the possibility of misinterpretation, poor data protection, privacy concerns and racism need to be discussed.
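The Precobs algorithm itself is proprietary and is not described in the abstract; the Python sketch below only illustrates the general idea behind place-based prediction, scoring grid cells by recent, time-decayed incident counts (a simplified "near repeat" heuristic). All data, grid cells and parameter values are invented for illustration.

```python
# Minimal sketch of a grid-based "near repeat" hotspot score; this is NOT the
# proprietary Precobs model, only an illustration of place-based prediction.
from collections import Counter
from datetime import date

# Hypothetical incident records: (date, grid_cell) pairs, e.g. burglaries
# already aggregated onto a coarse spatial grid.
incidents = [
    (date(2017, 3, 1), (4, 7)),
    (date(2017, 3, 3), (4, 7)),
    (date(2017, 3, 4), (5, 7)),
    (date(2017, 3, 10), (1, 2)),
]

def hotspot_scores(incidents, today, window_days=14, decay=0.9):
    """Score each grid cell by recent, exponentially decayed incident counts."""
    scores = Counter()
    for day, cell in incidents:
        age = (today - day).days
        if 0 <= age <= window_days:
            scores[cell] += decay ** age  # newer incidents weigh more
    return scores

scores = hotspot_scores(incidents, today=date(2017, 3, 5))
for cell, score in scores.most_common(3):
    print(f"cell {cell}: risk score {score:.2f}")
```

In practice such scores would feed into patrol planning, and the critical issues named above (misinterpretation, data protection, bias) arise precisely in how scores of this kind are computed and acted upon.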


Towards Simulating Carcinogenesis: Modeling and Simulating Carcinogenesis, Hematopoietic Tissue Homeostasis and Leukemogenesis

Jenny Groten, Maximilian Georg, Oliver Worm, Christoph Borner et al.

Affiliation: Institut für Molekulare Medizin und Zellforschung

DOI: 10.17160/josha.3.7.253

Languages: English

The previously identified cancer hallmarks (Groten et al. 2016, DOI: 10.17160/josha.3.7.252) were described by mathematical algorithms. Subsequently, a computational simulation of carcinogenesis was developed. In the next step, the proposed algorithms and correlations were tested, validated and adapted through the simulation (http://mertelsmann.psiori.com/). In a second model, we transferred the insights gained from the first simulation to the simulation of hematopoietic tissue homeostasis and leukemogenesis (http://hem-model.psiori.com/hema_simulation). Our findings indicate that the 10 “Hallmarks” proposed by Hanahan and Weinberg can be assigned to two major groups, “Growth/Apoptosis Balance” and “Genetic Fidelity, Immortality”. Modeling hematopoiesis revealed one missing hallmark, “Block of Differentiation”, which we propose to assign to the broader term “Stem Cell Features”.
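The published models are available at the URLs above; the toy sketch below is not that model, but illustrates how a "Growth/Apoptosis Balance" could be explored as a simple stochastic birth/death process. All population sizes and probabilities are arbitrary illustrative values.

```python
# Toy illustration (not the published model): a "Growth/Apoptosis Balance"
# treated as a stochastic birth/death process over one pool of cells.
import random

random.seed(1)

def simulate(population=100, steps=50, p_divide=0.30, p_die=0.30):
    """Each step, every cell divides with probability p_divide and dies with
    probability p_die (independent draws); the net balance drives the pool."""
    history = [population]
    for _ in range(steps):
        births = sum(1 for _ in range(population) if random.random() < p_divide)
        deaths = sum(1 for _ in range(population) if random.random() < p_die)
        population = max(population + births - deaths, 0)
        history.append(population)
    return history

balanced   = simulate(p_divide=0.30, p_die=0.30)  # tissue homeostasis
unbalanced = simulate(p_divide=0.33, p_die=0.30)  # balance tipped toward growth
print("balanced  :", balanced[-1], "cells after 50 steps")
print("unbalanced:", unbalanced[-1], "cells after 50 steps")
```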


Rechenwelten. Computersimulationen machen komplexe Systeme greifbar / Mathematical Worlds: Computer Simulations Allow Us to Comprehend Complex Systems

Manuela Lenzen

Affiliation: Zentrum für interdisziplinäre Forschung der Universität Bielefeld, 33615 Bielefeld, Germany

DOI: 10.17160/josha.3.4.211

Languages: German

The first simulation experiments were performed early in the 20th century, but it was the development of high-performance computing that turned simulations into a powerful tool in science and engineering. Simulation experiments have some obvious advantages: they are cheaper and easier to carry out than real-world experiments, and they allow testing for dangerous outcomes. Their main application is the simulation of complex processes that cannot be computed directly. To be simulated, a problem has to be given an appropriate mathematical form; the simulation can then approximate the possible behaviours of the simulated system. For the philosophy of science, simulation experiments raise questions such as: Do simulations really help to understand the underlying processes? How can one know that the simulated process resembles the real process in all relevant respects?
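As an illustration of this workflow (not an example from the article), the sketch below gives a system a mathematical form, the logistic growth equation dx/dt = r·x·(1 - x/K), and then approximates its behaviour by stepping through time with an explicit Euler scheme. The equation and all parameter values are chosen purely for illustration.

```python
# Illustrative example: a system is first given a mathematical form, here the
# logistic growth ODE dx/dt = r*x*(1 - x/K), and a simulation then
# approximates its behaviour by stepping through time.
def simulate_logistic(x0=1.0, r=0.5, K=100.0, dt=0.1, steps=200):
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + dt * r * x * (1 - x / K)   # explicit Euler step
        trajectory.append(x)
    return trajectory

traj = simulate_logistic()
print(f"population after {len(traj) - 1} steps: {traj[-1]:.1f} (capacity K = 100)")
```

Even this small example makes the epistemic question above concrete: a coarser step size dt changes the approximated trajectory, so one has to argue why the chosen discretization still captures the real process in the relevant respects.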


Cancer: Modeling evolution and natural selection, the „Mitosis Game“

Roland Mertelsmann, Maximilian Georg

Affiliation: Albert-Ludwigs-Universität Freiburg

DOI: 10.17160/josha.3.1.100

Languages: English

We have previously analyzed and discussed the importance of cellular evolution for oncogenesis and the clinical course of malignant disease. In view of the complexity of the genetic phenomena and the effects of environment and chance, we have designed a tool to both better understand and facilitate the study of evolution in silico. After a review of the literature and of evolutionary algorithms (see Reading List in the pdf document), we have developed a conceptual framework for describing, understanding and modeling evolutionary processes. As a result, we have identified ten key intrinsic parameters of cells, which we would like to call “The Hallmarks of Evolution”.
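The authors' "Mitosis Game" itself is not reproduced here; the toy sketch below only illustrates how selection acting on a single intrinsic cell parameter, here a mutable division probability, can be studied in silico. All names, probabilities and population sizes are assumptions made for illustration.

```python
# Toy sketch (not the authors' "Mitosis Game") of evolution in silico:
# each cell carries one intrinsic parameter, its division probability, which
# can mutate at mitosis; variants that divide more often spread by selection.
import random

random.seed(42)

def evolve(generations=40, pop_size=200, mut_sd=0.02):
    cells = [0.5] * pop_size          # each value = that cell's division probability
    for _ in range(generations):
        offspring = []
        for p in cells:
            if random.random() < p:                                   # mitosis
                child = min(max(random.gauss(p, mut_sd), 0.0), 1.0)   # mutation
                offspring += [p, child]
            # otherwise the cell dies without dividing (apoptosis / chance)
        if not offspring:                          # population went extinct
            break
        # limited resources: keep at most pop_size cells (random selection)
        cells = random.sample(offspring, min(pop_size, len(offspring)))
    return sum(cells) / len(cells)

print(f"mean division probability after selection: {evolve():.2f}")
```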


Tiefes Reinforcement Lernen auf Basis visueller Wahrnehmungen (Deep Reinforcement Learning Based on Visual Perceptions)

Sascha Lange

DOI: 10.17160/josha.1.1.7

Languages: German

The relevance of deep autoencoders for optimizing learning was clearly confirmed in this work. DFQ is a first algorithm that benefits from the power of autoencoders, enables learning on high-dimensional input data and thus clearly pushes the boundaries of value-function-based reinforcement learning. DFQ was not only applied successfully to realistic but synthetic image data; impressive results have also been achieved in real-world applications, results that can compete with those of classical approaches. DFQ has proven to be a promising approach that opens up many avenues for further research, and it has already clearly exceeded our own expectations. It is now possible to perform optimizing learning directly on unpreprocessed image data and thus to learn good strategies for controlling real systems directly from visual perceptions.
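The original DFQ implementation is not part of the abstract; the sketch below, assuming numpy, only illustrates the two-stage idea: an encoder compresses raw images into a few features, and fitted Q-iteration with a per-action linear regressor then learns a value function on those features. In the thesis the encoder is obtained by training a deep autoencoder on the raw observations; the fixed random projection here is merely a placeholder so the sketch stays runnable without a training loop, and all dimensions, transitions and rewards are synthetic.

```python
# Minimal sketch of the DFQ idea (not the original implementation): a learned
# encoder maps raw images to a low-dimensional feature vector, and
# value-function-based RL (fitted Q-iteration) then operates on those features.
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, FEAT_DIM, N_ACTIONS, GAMMA = 30 * 30, 4, 2, 0.95

# Placeholder for the trained autoencoder's encoder: a fixed random projection.
W_enc = rng.normal(size=(FEAT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

def encode(image):
    """Stand-in for the autoencoder encoder: raw image -> compact features."""
    return W_enc @ image.ravel()

# Synthetic experience: (image, action, reward, next_image) transitions.
transitions = [(rng.random((30, 30)), int(rng.integers(N_ACTIONS)),
                float(rng.random()), rng.random((30, 30))) for _ in range(200)]

# Fitted Q-iteration with one linear Q-regressor per action on encoded features.
theta = np.zeros((N_ACTIONS, FEAT_DIM))
for _ in range(20):                                   # FQI sweeps
    X  = np.array([encode(s)  for s, _, _, _ in transitions])
    X2 = np.array([encode(s2) for _, _, _, s2 in transitions])
    q_next = X2 @ theta.T                             # Q(s', a') for all actions
    targets = np.array([r for _, _, r, _ in transitions]) + GAMMA * q_next.max(axis=1)
    for a in range(N_ACTIONS):                        # refit each action's regressor
        mask = np.array([act == a for _, act, _, _ in transitions])
        if mask.any():
            theta[a], *_ = np.linalg.lstsq(X[mask], targets[mask], rcond=None)

greedy_action = int(np.argmax(theta @ encode(transitions[0][0])))
print("greedy action for first observation:", greedy_action)
```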