Talks (Archive)

Results 1 - 50 of 284

  • 26.09.2023
    10:30 - 11:00
    BSc
    Arithmetic in Uniform TC⁰ and First Order Logic with Counting
    Noah Peil
    Threshold circuits are a well-studied extension of the computational model of Boolean circuits. TC⁰ is the class of languages that can be decided by a family of polynomial-size, bounded-depth threshold circuits. If the family itself is easily computable, we speak of uniform TC⁰. It is a well-known fact that the basic arithmetic functions like addition, subtraction, and multiplication on the binary representations of integers can be computed by polynomial-size, bounded-depth threshold circuits. However, the aspect of uniformity is mostly neglected in the existing literature. In this talk, we investigate in detail how these arithmetic functions can be computed in uniform TC⁰. This class has a natural characterisation in terms of first-order logic with counting (FO+C), which allows us to analyse these functions from a purely logical perspective. To this end, we introduce FO+C as well as a framework for expressing arithmetic in this logic. We see that if we encode binary numbers by means of relation variables, certain arithmetic operations like addition and subtraction can easily be expressed using FO+C-formulas. We also consider more involved operations like multiplication and division and discuss how they can be carried out in this framework.
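    The logical expressibility of addition over relation-encoded binary numbers, as mentioned in the abstract above, can be illustrated with the standard carry-lookahead formula (a textbook construction; the talk's precise encoding may differ). With unary relations X and Y holding the positions of the 1-bits of the two summands:

```latex
\mathrm{Carry}(i) \equiv \exists j \,\bigl( j < i \wedge X(j) \wedge Y(j)
    \wedge \forall k\, (j < k \wedge k < i \rightarrow X(k) \vee Y(k)) \bigr),
\qquad
S(i) \equiv X(i) \oplus Y(i) \oplus \mathrm{Carry}(i).
```

    All quantifiers range over bit positions only, so the sum bits are already definable in plain first-order logic with order; counting becomes essential only for harder operations such as multiplication and division.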
  • 26.09.2023
    11:00 - 11:30
    BSc
    Clustering in a Relational Data World
    Kristina Ishii
    The main focus of this thesis is giving a detailed presentation of the relational k-means++ algorithm introduced by Moseley et al. [28], which is part of a relational k-means approximation algorithm the authors present. This is an algorithm that runs directly on a relational database without the need to perform a join to convert it into a matrix whose rows represent the data points. We present the relational algorithms model as introduced by Moseley et al. [28], calling an algorithm relational if it runs in polynomial time on relational input whenever the underlying database is acyclic. A relational algorithm that avoids a join can in this way offer a running time that is potentially exponentially smaller than the number of data points to be clustered. The relational k-means++ algorithm we present constructs a laminar set of hyperrectangles around previously sampled centres to define a probability distribution for the sampling of the next centre and uses this distribution together with rejection sampling to sample the next centre. Apart from the presentation of the algorithm, the thesis introduces functional aggregation queries, a formal abstraction describing the fundamental building blocks of many relational algorithms. In the final part of the thesis, we leave centroid-based clustering behind to assess the steps required to develop a relational algorithm for spectral clustering.
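    For context, the classical (non-relational) k-means++ seeding that the relational variant emulates can be sketched as follows; function and parameter names here are illustrative, not taken from Moseley et al.:

```python
import random

def kmeans_pp_seed(points, k, dist2):
    """Classical k-means++ seeding (D^2 sampling).

    Each new centre is drawn with probability proportional to its
    squared distance to the nearest centre chosen so far.
    """
    centres = [random.choice(points)]
    while len(centres) < k:
        # squared distance of every point to its closest current centre
        d2 = [min(dist2(p, c) for c in centres) for p in points]
        total = sum(d2)
        r = random.uniform(0, total)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centres.append(p)
                break
    return centres
```

    The relational version avoids exactly the materialisation of `points` as an explicit list, which for a join of many tables can be exponentially larger than the database itself.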
  • 26.09.2023
    10:30 - 11:00
    BSc
    Arithmetic in Uniform TC⁰ and First Order Logic with Counting
    Noah Peil
    Threshold circuits are a well-studied extension of the computational model of Boolean circuits. TC⁰ is the class of languages that can be decided by a family of polynomial-size, bounded-depth threshold circuits. If the family itself is easily computable, we speak of uniform TC⁰. It is a well-known fact that the basic arithmetic functions like addition, subtraction, and multiplication on the binary representations of integers can be computed by polynomial-size, bounded-depth threshold circuits. However, the aspect of uniformity is mostly neglected in the existing literature. In this talk, we investigate in detail how these arithmetic functions can be computed in uniform TC⁰. This class has a natural characterisation in terms of first-order logic with counting (FO+C), which allows us to analyse these functions from a purely logical perspective. Thereto, we introduce FO+C as well as a framework for expressing arithmetic in this logic. We see that if we encode binary numbers by means of relation variables, certain arithmetic operations like addition and subtraction can easily be expressed using FO+C-formulas. We also consider more involved operations like multiplication and division and discuss how they can be carried out in this framework.
  • 26.09.2023
    11:00 - 11:30
    BSc
    Clustering in a Relational Data World
    Kristina Ishii
    The main focus of this thesis is giving a detailed presentation of the relational k-means++ algorithm introduced by Moseley et al. [28], which is a part of a relational k-means approximation algorithm the authors present. This is an algorithm that runs directly on a relational database without the need to perform a join to convert it into a matrix whose rows represent the data points. We present the relational algorithms model as introduced by Moseley et al. [28] – calling an algorithm relational if it runs in polynomial time on relational input if the underlying database is acyclic. A relational algorithm that avoids a join can in this way offer a running time that is potentially exponentially smaller than the number of data points to be clustered. The relational k-means++ algorithm we present constructs a laminar set of hyperrectangles around previously sampled centres to define a probability distribution for the sampling of the next centre and uses this distribution and rejection sampling to sample the next centre. Apart from the presentation of the algorithm, the thesis introduces functional aggregation queries, which are a formal abstraction describing the fundamental building blocks of many relational algorithms. In the final part of the thesis, we leave centroid-based clustering behind us to assess the steps required to develop a relational algorithm for spectral clustering.
  • 22.09.2023
    10:30 - 11:30
    BSc
    The VC Dimension of Counting Logic
    Moritz Effen
    This thesis uses the Vapnik–Chervonenkis (VC) dimension to examine the expressiveness of counting logic on graphs. The VC dimension measures the complexity of a class of Boolean functions; applied to graphs, it indicates how well a class is suited to distinguishing graphs. Counting logic (C) is an extension of first-order logic that can express Boolean functions, and its expressive power is closely related to the Weisfeiler–Leman (WL) algorithm and graph neural networks (GNNs). We establish lower and upper bounds for fragments of counting logic in which the quantifier rank and the number of variables are bounded, to assess the impact of these parameters on the expressiveness of C. Furthermore, we consider different classes of graphs for which different parameters are bounded.

    First, we show bounds that are characterised via WL. We then present an approach that uses the partition function recursively to construct a large number of distinguishable graphs, yielding a lower bound, which we assess experimentally. Furthermore, we present recent results characterising the VC dimension of GNNs by the bit length of their weights and link them to counting logic. The thesis additionally introduces an approach for this result in which the graphs use only a polynomial number of vertices in the bit length, by constructing a GNN that counts in binary on corresponding graphs. Finally, we present recent results on an upper bound for the VC dimension of GNNs and show how they relate to counting logic.
  • 15.09.2023
    10:30 - 11:15
     
    Selecting Graph Isomorphism Solvers with Machine Learning
    Thomas Hsu
    Deciding graph isomorphism is a non-trivial matter, and over the years many different tools have been developed to solve the graph isomorphism problem efficiently. These tools perform differently depending on the input graphs. Knowing the fastest solver for given graphs beforehand could optimize graph isomorphism testing and save a significant amount of time and resources. In this work, numerous graphs from a large variety of graph classes are each solved with multiple different tools. The running times of these tools are collected and used to train models with different neural network architectures. Although the selection of the fastest solver shows promising results, current neural network architectures appear unsuited for this task: they are unable to capture some crucial properties of the graphs, properties that are the foundation of essential techniques for graph isomorphism testing.
  • 15.09.2023
    11:15 - 12:00
    BSc
    The Parameterized Inapproximability of the Clique Problem
    Lennard Schober

    The task of k-Clique is to decide whether a given simple graph G contains a clique of size k. A clique is a subgraph in which an edge exists between every pair of vertices. Assuming the largest clique in G contains k vertices, we consider f(k)-fpt-approximation algorithms for the parameterized clique problem, which on every graph G output a clique of size at least k/f(k) and at most k in fpt time. In the parameterized version of k-Clique, k is used as an additional parameter to the input length for the complexity analysis.
    This talk introduces fundamental notions of parameterized complexity theory and approximation theory. The main part presents the latest results on the parameterized approximation of k-Clique, focusing on Lin's proof (STOC 2021) that the parameterized clique problem cannot be approximated to within any constant factor in fpt time, provided FPT ≠ W[1].
  • 04.09.2023
    14:00 - 15:00
    BSc
    A Framework for Graph Similarity Computations
    Viktor Panayotov

    In the realm of data analysis and pattern recognition, graph similarity is a pivotal problem that aims to quantify the degree of likeness between two graphs. While graph isomorphism is the most widely known problem that determines whether two graphs are structurally equivalent, its edge-preserving nature makes it quite restrictive and unfit for the majority of data stemming from real-world application areas, primarily due to the presence of noise and variations. Hence, in our presentation, we delve into inexact graph matching techniques that are more flexible in determining the degree of similarity between two graphs, employing various similarity metrics. Due to the NP-hard nature of the graph similarity problem, there are many different approaches that aim to address it. In this talk, we describe and examine two variants of a depth-first graph edit distance algorithm and one algorithm based on fractional isomorphism. Moreover, we introduce the cornerstone of the thesis: a framework developed to empirically evaluate diverse graph similarity algorithms using benchmark sets. We present its core functionalities, encompassing general evaluation of algorithms, conducting classification experiments, and the visualization of the graphs used in the evaluation process. Furthermore, we discuss the evaluation results of the three graph similarity algorithms provided by the framework.
  • 04.09.2023
    14:45 - 15:45
    MSc
    Optimal Transport Distance for Graphs
    Christoph Sandrock
    Graph comparison is the problem of determining how similar or different graphs are. With the increased use of structured data in machine learning, this problem has received much attention in recent years. The Graph Optimal Transport (GOT) framework introduces a novel graph distance that combines ideas from optimal transport and graph signal distributions. The authors claim that their approach yields a structurally meaningful distance between graphs. In a more recent paper, they proposed the filter Graph Optimal Transport (fGOT) framework as an extension of the GOT framework. This extension makes it possible to compare graphs of different sizes and introduces different filters for the graph signals. This thesis analyzes both frameworks. We first set them in the context of existing distance measures and show parallels to the commonly used Frobenius distance. Moreover, we propose an extension that incorporates node labels in the framework. Finally, we experimentally evaluate the fGOT framework in several graph classification tasks and compare its performance with state-of-the-art graph kernels.
  • 22.08.2023
    15:00 - 15:30
     
    BSc
    Sampling SAT-Solutions
    Simon Wilmes
  • 22.08.2023
    15:45 - 16:15
    MSc
    Untangling a Clustering Algorithm
    Jonas Groven
    In their 2020 paper "Clustering with Tangles: Algorithmic Framework and Theoretical Guarantees", Klepper et al. introduce an unconventional framework for clustering that is applicable to the use cases of binary questionnaires, graphs, and feature-based data. Moreover, the framework is largely independent of the actual data, which allows it to be easily extended to new use cases. Unlike conventional clustering algorithms, instead of constructing a clustering directly on the given dataset, this framework constructs tangles over the dataset and associates clusters with them. As constructing all possible tangles over the dataset is computationally expensive, Klepper et al. only consider tangles consisting of orientations of bipartitions from a given set of bipartitions and furthermore bound the sizes of the regions tangles are allowed to point to. With these restrictions, the framework constructs all remaining tangles recursively by adding orientations of bipartitions from the given set in order of cost. This recursive computation induces a tree structure in which each vertex corresponds to a tangle and each leaf corresponds to a tangle that cannot be extended by another orientation and is interpreted as a cluster. To assign each data point to a cluster, the computed tree is traversed as a decision tree with branch probabilities based on membership probabilities in the orientations corresponding to the branch. Since constructing tangles to identify clusters is a novel approach to clustering, not much is known yet about the quality of the results or the information attained by this method. While Klepper et al. claim good results with this framework, these have not yet been reproduced. In this thesis, we implement the framework, review its guarantees, and try to reproduce and expand upon the results Klepper et al. obtained on synthetic and real data.
  • 14.08.2023
    16:00 - 17:00
     
    Graph Neural Networks as Global Search Heuristics for Weighted and Partial Constraint Satisfaction
    Niklas Fücker
  • 01.08.2023
    10:00 - 11:00
    MSc
    Applicability domain and uncertainty quantification methods for molecular property prediction
    Jamshaid Sohail
    This thesis uses neural networks to predict molecular properties on benchmark datasets from MoleculeNet and explores various uncertainty quantification techniques to assess the uncertainty of these predictions. The effectiveness of the different techniques is evaluated with uncertainty evaluation methods, and the findings are summarized in the conclusion. Furthermore, the thesis comprehensively explains the graph neural networks used for molecular property prediction and includes a chapter on uncertainty quantification techniques for neural networks. Both single-task and multi-task learning are employed, enabling the simultaneous prediction of one or multiple molecular properties, and several uncertainty quantification methods are implemented and compared. The experiments and results are presented along with graphs and plots.
  • 01.08.2023
    10:00 - 11:00
    MSc
    Applicability domain and uncertainty quantification methods for molecular property prediction
    Jamshaid Sohail
    This thesis uses neural networks to predict molecular properties on benchmark datasets from moleculenet. The thesis also explores various uncertainty quantification techniques to assess the uncertainty of these molecular property predictions. The effectiveness of different uncertainty quantification techniques is evaluated using uncertainty evaluation methods, and the findings are summarized in the conclusion. Furthermore, the thesis comprehensively explains graph neural networks used for molecular property predictions. It includes a chapter on uncertainty quantification techniques for neural networks and molecular property predictions. Molecular property prediction has utilized single-task and multi-task learning, enabling the simultaneous prediction of one or multiple molecular properties. Additionally, uncertainty quantification methods have been implemented and compared. Multi-task learning techniques have been explained, and the experiments and results have been presented along with graphs and plots.
  • 06.07.2023
    10:00 - 10:45
    MSc
    Boolean Satisfiability Near the Satisfiability Threshold
    Sophie Hallstedt
    Boolean satisfiability (SAT) is a central problem in theoretical computer science, but many practical problems can also be mapped to SAT instances. For practical applications, the average complexity of SAT solvers is more relevant than the worst-case complexity. One way to estimate an algorithm's average complexity is to evaluate its performance on random SAT instances drawn from the random k-SAT distribution.

    The SAT threshold is the clause density where the probability of a
    generated formula being satisfiable quickly falls from 1 to 0. Its
    existence was long conjectured but has only recently been proven
    rigorously. Additionally, it is harder for all known algorithms to
    determine satisfiability in the region around the SAT threshold.
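    The random k-SAT distribution referred to above can be sketched as follows (a standard generator; names and signature are illustrative):

```python
import random

def random_ksat(n, k, alpha, rng=random):
    """Draw a random k-SAT formula over n variables with m = round(alpha * n)
    clauses (alpha is the clause density).

    Each clause picks k distinct variables uniformly at random and negates
    each with probability 1/2.  Literals are encoded as signed integers.
    """
    m = round(alpha * n)
    formula = []
    for _ in range(m):
        vars_ = rng.sample(range(1, n + 1), k)
        clause = tuple(v if rng.random() < 0.5 else -v for v in vars_)
        formula.append(clause)
    return formula
```

    For k = 3, instances generated with alpha near the empirically hard region around the satisfiability threshold (roughly 4.27) are the ones on which solver performance degrades.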

    In this talk, I investigate the behaviour of GNN-based methods in this hard
    region of random k-SAT. I evaluate ANYCSP experimentally and compare its
    performance to that of other existing algorithms. I further the theoretical
    understanding of GNN-based methods by introducing factor graph neural
    networks (FGNNs), a general GNN-framework for SAT. I present upper and
    lower bounds of the algorithmic threshold for different classes of FGNN,
    showing that they perform at least as well as the algorithm class of
    low-degree polynomials.

  • 16.06.2023
    10:30 - 11:30
    ATI
    Fine-grained Expressivity of Graph Neural Networks
    Jan Böker
    Numerous
    recent works have analyzed the expressive power of message-passing
    graph neural networks (MPNNs), primarily utilizing combinatorial
    techniques such as the 1-dimensional Weisfeiler-Leman test (1-WL) for
    the graph isomorphism problem. However, the graph isomorphism objective
    is inherently binary, not giving insights into the degree of similarity
    between two given graphs. We resolve this issue by considering
    continuous extensions of both 1-WL and MPNNs to graphons. Concretely, we
    show that the continuous variant of 1-WL delivers an accurate
    topological characterization of the expressive power of MPNNs on
    graphons, revealing which graphs these networks can distinguish and the
    level of difficulty in separating them. We identify the finest topology
    where MPNNs separate points and prove a universal approximation theorem.
    Consequently, we provide a theoretical framework for graph and graphon
    similarity combining various topological variants of classical
    characterizations of the 1-WL. In particular, we characterize the
    expressive power of MPNNs in terms of the tree distance, which is a
    graph distance based on the concepts of fractional isomorphisms, and
    substructure counts via tree homomorphisms, showing that these concepts
    have the same expressive power as the 1-WL and MPNNs on graphons.
    Empirically, we validate our theoretical findings by showing that
    randomly initialized MPNNs, without training, exhibit competitive
    performance compared to their trained counterparts. Moreover, we
    evaluate different MPNN architectures based on their ability to preserve
    graph distances, highlighting the significance of our continuous 1-WL
    test in understanding MPNNs' expressivity.

    This is joint work with Ron Levie, Ningyuan Huang, Soledad Villar, and
    Christopher Morris.
  • 06.06.2023
    12:00 - 12:45
     
    Master Colloquium Luisa Erhard
  • 26.05.2023
    10:30 - 11:30
    ATI
    Simulating Logspace-Recursion with Logarithmic Quantifier Depth
    Luca Oeljeklaus
    The fixed-point logic LREC= was developed by Grohe et al. (CSL 2011) in the quest for a logic to capture all problems decidable in logarithmic space. It extends FO+C, first-order logic with counting, by an operator that formalises a limited form of recursion. We show that for every LREC=-definable property on relational structures, there is a constant k such that the k-variable fragment of first-order logic with counting quantifiers expresses the property via formulae of logarithmic quantifier depth. This yields that any pair of graphs separable by the property can be distinguished with the k-dimensional Weisfeiler-Leman algorithm in a logarithmic number of iterations. In particular, it implies that the algorithm identifies every interval graph and every chordal claw-free graph in logarithmically many iterations.
  • 12.05.2023
    10:30 - 11:30
    ATI
    Learning algorithms for ω-automata
    León Bohn
    This talk is about the identification of ω-automata from sets of positive and negative ultimately periodic example words. We’ll recap the core mechanisms underlying learning algorithms for deterministic finite automata, then illustrate the difficulties one faces in extending these approaches to deterministic ω-automata, and finally discuss a polynomial-time algorithm for constructing a deterministic parity automaton (DPA) from a given set of positive and negative ultimately periodic example words. This is joint work with Christof Löding.
  • 02.05.2023
    15:00 - 15:30
    BSc
    Cops and Robber Games for Tree Depth and Cycle Rank
    Beren Güzin Malatyali
    The tree-depth td(G) of a connected graph G is defined as the minimum height of a rooted tree T whose closure contains G as a subgraph. Tree-depth is also known under other names, such as the vertex ranking number, the minimum number of colours of a centered coloring, and the minimum height of an elimination tree for a graph. The cycle rank cr(G) of a digraph G is a parameter relating digraph complexity to other fields, such as regular language complexity and asymmetric matrix factorization. It is considered an extension of tree-depth to digraphs, as the two are closely related. In this thesis, we consider variants of the cops-and-robber game that are known to characterize the tree-depth and cycle rank of a graph. We formally present the variants under consideration and show their equivalence to tree-depth and cycle rank, respectively. Further, we examine structural obstructions that give tight min-max characterizations for tree-depth and cycle rank.
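    The recursive characterisation underlying such games, td(G) = 1 + min over vertices v of td(G - v) for connected G, taking the maximum over connected components otherwise, can be sketched directly; this brute-force version is illustrative only and exponential in the graph size:

```python
def treedepth(vertices, edges):
    """Exact tree-depth via the recursive characterisation.

    td(empty) = 0; td of a disconnected graph is the maximum over its
    components; td of a connected graph is 1 + min over v of td(G - v).
    Exponential time: suitable only for tiny graphs.
    """
    vertices = frozenset(vertices)
    if not vertices:
        return 0
    comps = components(vertices, edges)
    if len(comps) > 1:
        return max(treedepth(c, edges) for c in comps)
    return 1 + min(treedepth(vertices - {v}, edges) for v in vertices)

def components(vertices, edges):
    """Connected components of the subgraph induced on `vertices`."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        if u in adj and w in adj:
            adj[u].add(w)
            adj[w].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(frozenset(comp))
    return comps
```

    For example, a path on four vertices has tree-depth 3: removing a middle vertex leaves components of tree-depth at most 2.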
  • 28.04.2023
    08:30 - 09:00
    BSc
    The Parameterized Complexity of Finding Subgraphs
    Ahmed Othman
    Subgraph isomorphism, the problem of deciding whether one graph occurs as a subgraph of another, is a fundamental challenge in computer science and mathematics with wide-ranging applications. In this presentation, we explore the fascinating world of the parameterized complexity of subgraph isomorphism and examine several classes of graphs such as grids, walls, complete bipartite graphs, and planar graphs. We uncover the difficulty of solving subgraph isomorphism for grids and walls and show the high computational complexity of the problem. However, we also show that there are cases where the problem can be solved efficiently, in particular for graphs of bounded treewidth. We introduce the concept of planar graph patterns and the powerful technique of color coding, and provide insights into the tractability of subgraph isomorphism under different graph classes and structural constraints.
  • 26.04.2023
    11:00 - 12:00
    BSc
    How Best to Approximate Treedepth
    Ziyong Cheong
    Treedepth is a graph invariant that plays an important role in matrix
    factorisation, communications monitoring, scheduling, graph homomorphisms, and
    even logic. Intuitively, treedepth measures the 'depth' of a graph, allowing
    one to gauge the performance of a divide-and-conquer algorithm on a graph by
    first computing its treedepth.

    Sadly, computing the treedepth of a graph is computationally hard. However,
    there exists a simple polynomial-time approximation algorithm using treewidth,
    treedepth's more famous relative. In addition, there are also extremely
    performant heuristic algorithms for the treedepth problem. In this presentation,
    I will present both the simple approximation algorithm as well as the heuristic
    algorithm ExTREEm, which won the heuristic track of the PACE 2020 Treedepth
    challenge.
  • 18.04.2023
    10:30 - 11:30
    BSc
    Complexity and Approximability of Graph Mismatch Norms
    Tjaden Schoell
    Measuring the similarity of two graphs is an important algorithmic problem. As it can be seen as an optimisation version of graph isomorphism, it is natural to formulate the problem as finding an alignment between two graphs such that the cost of a suitably defined difference of the graphs is minimal. We introduce mismatch graphs as a formalised notion of the difference of graphs and measure their cost using mismatch norms. Matrix norms are very natural candidates for mismatch norms, and we study the behaviour of the entrywise, operator, Schatten, and Ky Fan norms as well as the cut norm. We give an overview of the existing results, analyse the singular values of various graph classes, show NP-hardness of large chunks of the Schatten norms even on restricted classes of graphs, and improve the inapproximability result regarding the multiplicative error for the operator norms.
  • 18.04.2023
    12:00 - 12:45
    BSc
    Expressivity and Complexity of Arc-Colour Refinement
    Emre Irhan
    The Weisfeiler-Lehman (WL) algorithm is an essential procedure for distinguishing relational structures, most significantly used as a subroutine for the graph isomorphism problem. The basic idea of the Weisfeiler-Lehman algorithm is to partition the vertex k-tuples according to their numbers of neighbours in the other partition classes. As k grows, the algorithm becomes more powerful in terms of expressivity, meaning it can determine whether two graphs are isomorphic more accurately and for more graphs. But as the size of the tuples grows, so does the running time of the algorithm. We investigate a variant of the Weisfeiler-Lehman algorithm for 2-tuples: instead of refining the partitions of all vertex 2-tuples, we only take the edges and their neighbouring edges when creating and refining the partitions. This procedure is called arc-colour refinement. We offer a variant of Ehrenfeucht-Fraïssé games that characterizes arc-colour refinement. Afterwards, we analyse our implementation in Python 3.11. The results suggest that 1-WL and arc-colour refinement could be equally powerful.
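    For comparison, classical 1-WL colour refinement on vertices, which arc-colour refinement adapts to edges, can be sketched as follows (an illustrative implementation, not the thesis code):

```python
from collections import Counter

def colour_refinement(adj):
    """Classical 1-WL vertex colour refinement.

    adj: dict mapping each vertex to its set of neighbours.
    Returns the stable colouring as a dict vertex -> colour id.
    The arc (edge) variant works analogously on edges instead of vertices.
    """
    colour = {v: 0 for v in adj}
    while True:
        # new signature = old colour plus multiset of neighbour colours
        signature = {
            v: (colour[v],
                tuple(sorted(Counter(colour[u] for u in adj[v]).items())))
            for v in adj
        }
        # canonically rename signatures to small integers
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new = {v: ids[signature[v]] for v in adj}
        # stable once the number of colour classes stops growing
        if len(set(new.values())) == len(set(colour.values())):
            return new
        colour = new
```

    On a path with three vertices, the two endpoints receive the same colour and the middle vertex a different one; on a triangle, all vertices stay in one class.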
  • 10.03.2023
    11:00 - 12:00
     
    Informatik Oberseminar: Descriptive Complexity of Learning
    Steffen van Bergerem
    Location: Room 9222, Building E3, Ahornstr. 55

    Zoom: https://rwth.zoom.us/j/95679484581?pwd=Q0N4SUFZQ21jVERZMDAwc2cyQXpzZz09

    Speaker: Steffen van Bergerem, M.Sc., Lehrstuhl für Informatik 7

    Topic: Descriptive Complexity of Learning

    Abstract:

    Supervised learning is a field in machine learning that strives to classify data based on labelled training examples. In the Boolean setting, each input is to be assigned to one of two classes, and there are several fruitful machine-learning methods to obtain a classifier.

    However, different algorithms usually come with different types of classifiers, e.g. decision trees, support-vector machines, or neural networks, and this is cumbersome for a unified study of the intrinsic complexity of learning tasks.

     

    This thesis aims at strengthening the theoretical foundations of machine learning in a consistent framework. In the setting due to Grohe and Turán (2004), the inputs for the classification are tuples from a relational structure and the search space for the classifiers consists of logical formulas. The framework separates the definition of the class of potential classifiers (the hypothesis class) from the precise machine-learning algorithm that returns a classifier. This facilitates an information-theoretic analysis of hypothesis classes as well as a study of the computational complexity of learning hypotheses from a specific hypothesis class.

     

    As a first step, Grohe and Ritzert (2017) proved that hypotheses definable in first-order logic (FO) can be learned in sublinear time over structures of small degree. We generalise this result to two extensions of FO that provide data-aggregation methods similar to those in commonly used relational database systems. First, we study the extension FOCN of FO with counting quantifiers. Then, we analyse logics that operate on weighted structures, which can model relational databases with numerical values. For that, we introduce the new logic FOWA, which extends FO by weight aggregation. We provide locality results and prove that hypotheses definable in a fragment of the logic can be learned in sublinear time over structures of small degree.

     

    To better understand the complexity of machine-learning tasks on richer classes of structures, we then study the parameterised complexity of these problems. On arbitrary relational structures and under common complexity-theoretic assumptions, learning hypotheses definable in pure first-order logic turns out to be intractable. In contrast to this, we show that the problem is fixed-parameter tractable if the structures come from a nowhere dense class. This subsumes numerous classes of sparse graphs. In particular, we obtain fixed-parameter tractability for planar graphs, graphs of bounded treewidth, and classes of graphs excluding a minor.

     

     


  • 09.03.2023
    11:00 - 12:00
     
    Advances in algorithmic meta-theorems
    Sebastian Siebertz

    Abstract: 

    Algorithmic meta-theorems provide general explanations when and why certain algorithmic problems can be solved efficiently. They are typically formulated in terms of logic (defining the descriptive complexity of the problems) and structural properties of their inputs. A prototypical algorithmic meta-theorem is Courcelle’s Theorem, stating that every graph property definable in monadic second-order logic (MSO) can be decided in linear time on every graph class of bounded treewidth. Similarly, every graph property definable in first-order logic (FO) can be tested efficiently on every nowhere dense graph class.

    In this talk I will present recent progress on algorithmic meta-theorems for FO on dense graph classes as well as for logics whose expressive power lies between MSO and FO. The presented results reveal beautiful connections between structural graph theory, classical model theory and algorithmics.

  • 06.03.2023
    13:00 - 14:00
    BSc
    The Complexity of Modular Counting of Graph Homomorphisms
    Stanislav Mironenko
    [bachelor]
  • 06.03.2023
    14:00 - 15:00
     
    Treelike decompositions for transductions of sparse graphs
    Sandra Kiefer

    We give new decomposition theorems for classes of graphs that can be transduced in first-order logic from classes of sparse graphs — more precisely, from classes of bounded expansion and from nowhere dense classes. In both cases, the decomposition takes the form of a single colored rooted tree of bounded depth where, in addition, there can be links between nodes that are not related in the tree. The constraint is that the structure formed by the tree and the links has to be sparse. Using the decomposition theorem for transductions of nowhere dense classes, we show that they admit low-shrubdepth covers of size O(n^ε), where n is the vertex count and ε > 0 is any fixed real. This solves an open problem posed by Gajarský et al. (ACM TOCL '20) and also by Briański et al. (SIDMA '21).

     

    This is joint work with Jan Dreier, Jakub Gajarský, Michał Pilipczuk, and Szymon Toruńczyk.

  • 13.02.2023
    11:00 - 12:00
    BSc
    Quantum Graph Isomorphism
    Dennis Sibirkin
    [bachelor] We present four different graph isomorphism notions: classical, quantum, quantum commuting, and non-signalling graph isomorphisms. We define them based on the existence of perfect classical/quantum/non-signalling strategies in a synchronous nonlocal game, called the $(G,H)$-isomorphism game. For the quantum strategies, we consider the tensor product framework and the commuting framework of quantum mechanics. We summarize different characterizations of these graph isomorphisms; in particular, we present a purely algebraic characterization of each isomorphism notion and relate each of them to integer programming. We will see that two graphs are non-signalling isomorphic if and only if they are fractionally isomorphic, and explain the result that two graphs are quantum commuting isomorphic if and only if they admit the same number of homomorphisms from each planar graph. In order to separate the isomorphism notions from each other, we show a reduction from linear binary constraint system games to classical and quantum (commuting) isomorphisms. As a consequence, determining whether two graphs are quantum (commuting) isomorphic is undecidable.
  • 03.02.2023
    08:30 - 09:30
    MSc
    Substructure Counting for Molecular Property Prediction with Neural Networks
    Attila Lischka
    [master] To predict molecular properties, functional group contribution models have been successfully used in the past. Simultaneously, graph neural networks (GNNs) have emerged as a tool operating on graph-structured data, achieving state-of-the-art performance in many graph-related tasks. Inspired by these two architectures, in this thesis, we propose a new feature vector initialization for GNNs operating on molecular data that encodes information about functional groups in the underlying molecules. We show that these GNNs are theoretically more powerful and can distinguish molecules that cannot be differentiated by functional group contribution models. At the same time, we show that functional group encodings make GNNs more expressive compared to GNNs without these encodings. Afterwards, we underline our findings empirically by conducting experiments on several molecular datasets. By this, we show that GNNs using functional group encodings perform noticeably better than GNNs without these encodings and, depending on the dataset size, also better than functional group contribution models.
  • 27.01.2023
    10:30 - 11:30
    ATI
    Graph Similarity Based on Matrix Norms
    Timo Gervens
    Quantifying the similarity between two graphs is a fundamental algorithmic problem at the heart of many data analysis tasks for graph-based data. We study the computational complexity of a family of similarity measures based on quantifying the mismatch between the two graphs, that is, the "symmetric difference" of the graphs under an optimal alignment of the vertices. An important example is similarity based on graph edit distance. While edit distance calculates the "global" mismatch, that is, the number of edges in the symmetric difference, our main focus is on "local" measures calculating the maximum mismatch per vertex. Mathematically, our similarity measures are best expressed in terms of the adjacency matrices: the mismatch between graphs is expressed as the difference of their adjacency matrices (under an optimal alignment), and we measure it by applying some matrix norm. Roughly speaking, global measures like graph edit distance correspond to entrywise matrix norms like the Frobenius norm and local measures correspond to operator norms like the spectral norm. We prove a number of strong NP-hardness and inapproximability results even for very restricted graph classes such as bounded-degree trees.
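    The global/local contrast can be made concrete with a small sketch (my own illustration, not the authors' implementation) that evaluates both norms of the adjacency-matrix difference under one fixed vertex alignment; note that computing the optimal alignment is the computationally hard part studied in the talk.

    ```python
    import numpy as np

    def mismatch_norms(A, B, perm):
        """Mismatch between graphs with adjacency matrices A and B under a
        fixed vertex alignment, given as a permutation of B's vertices."""
        P = np.eye(len(perm))[perm]      # permutation matrix: row i is e_{perm[i]}
        D = A - P @ B @ P.T              # difference under the alignment
        frob = np.linalg.norm(D, "fro")  # "global" mismatch (edit-distance-like)
        spec = np.linalg.norm(D, 2)      # "local" mismatch (spectral norm)
        return frob, spec

    # Triangle vs. path on 3 vertices under the identity alignment:
    # they differ in exactly one edge.
    A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
    B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    frob, spec = mismatch_norms(A, B, [0, 1, 2])
    ```

    For simple graphs, the squared Frobenius norm is exactly twice the number of edges in the symmetric difference, which is the connection to graph edit distance mentioned above.
    
    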
  • 16.12.2022
    11:30 - 12:15
    BSc
    Shapley Value Explanations for Graph Invariants
    Jens Breitung

    [bachelor] The Shapley value is an attribution function originating in cooperative game theory that concerns itself with distributing wealth “fairly” among the players of a cooperative game. Shapley values have been studied in many different fields, but so far there exists no framework for graphs. We introduce the Shapley value for graphs, where the cooperative games are defined by integer graph invariants. Since graphs provide a lot of natural structure, we study how this structure can be exploited to improve the computation of Shapley values and how their computational complexity relates to that of the graph invariants themselves. While counting problems are at least as hard as their decision variants, we give evidence that in terms of Shapley value considerations, this relation does not necessarily hold.
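    As a toy illustration of a cooperative game defined by an integer graph invariant, the following sketch computes exact Shapley values by averaging marginal contributions over all player orderings. The invariant (number of induced edges) and the example graph are my own choices, not taken from the thesis, and the brute-force enumeration is only feasible for tiny games.

    ```python
    from itertools import permutations

    def shapley_values(players, value):
        """Exact Shapley values by averaging marginal contributions over
        every ordering of the players (exponential; tiny games only)."""
        phi = {p: 0.0 for p in players}
        perms = list(permutations(players))
        for order in perms:
            coalition = set()
            for p in order:
                before = value(frozenset(coalition))
                coalition.add(p)
                phi[p] += value(frozenset(coalition)) - before
        return {p: s / len(perms) for p, s in phi.items()}

    # Game defined by a graph invariant: number of edges induced by a coalition.
    edges = {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2}), frozenset({2, 3})}
    def induced_edges(S):
        return sum(1 for e in edges if e <= S)

    phi = shapley_values([0, 1, 2, 3], induced_edges)
    ```

    For this particular invariant the structure of the graph determines the answer directly: each edge splits its contribution evenly between its endpoints, so φ(v) = deg(v)/2, which the brute-force computation reproduces.
    
    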
  • 02.12.2022
    10:30 - 11:30
    ATI
    Approximate Probabilistic Query Evaluation
    Julia Feith
    Probabilistic databases model uncertain data as a probability space over traditional relational databases. Evaluating queries in probabilistic databases hence requires computing confidences for the answers, which is #P-hard even for (some) conjunctive queries. Therefore, new methods are needed to obtain practical solutions.

    We survey the computational complexity of approximately evaluating queries. In particular, we study the data complexity of queries with regard to additive and multiplicative approximation. We adapt algorithms and techniques for proving hardness-of-approximation results from related fields to show that these approximation paradigms give rise to a strict hierarchy under common complexity-theoretic assumptions.

    This is joint work in progress with Peter Lindner, Christoph Standke, and Martin Grohe.
  • 25.11.2022
    10:30 - 11:30
    ATI
    Some Might Say Sum Is All You Need
    Eran Rosenbluth
    Graph Neural Networks (GNNs) are a class of algorithms for processing graphs, used in learning tasks on graphs. A GNN is defined as a series of computations on a vertex's state, interleaved with message passes between the vertex and its neighbors. The computation and message-passing procedures are identical for each vertex in any graph – defined once and applied everywhere, making GNNs isomorphism-invariant and inherently scalable algorithms. Each computation (in the series of computations) starts with an aggregation of the vertex's neighbors' current state. Specific GNN architectures may differ in various properties, one of which is the aggregation functions they use. A key characteristic of a learning framework is the expressivity of its underlying class of algorithms. There have been several works concerning the effect of the specific aggregation function a GNN architecture uses on that architecture's expressivity. Although not proving it, some of these works strengthened a belief that Sum aggregation is generally superior (expressivity-wise) to other popular aggregations, Mean and Max. We set out to examine that idea. This is work in progress, joint with Martin Grohe and Jan Toenshoff.
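    The intuition behind the conjectured superiority of Sum can be seen on a minimal example (my own illustration, not from the talk): mean and max aggregation cannot tell one neighbor with feature 1 apart from two such neighbors, while sum can, because it preserves multiplicities.

    ```python
    def aggregate(multiset, kind):
        """The three aggregation functions a message-passing layer might use."""
        if kind == "sum":
            return sum(multiset)
        if kind == "mean":
            return sum(multiset) / len(multiset)
        if kind == "max":
            return max(multiset)
        raise ValueError(kind)

    # Neighborhood feature multisets of a vertex with one neighbor vs. a
    # vertex with two identical neighbors:
    A = [1.0]
    B = [1.0, 1.0]
    ```
    
    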
  • 18.11.2022
    10:30 - 11:30
    ATI
    Reconstructing Graphs from Homomorphism Counts
    Tim Seppelt
    Counting homomorphisms F_i → G from a family of graphs F_1, …, F_n has emerged as a promising strategy for extracting properties of a graph G, from a theoretical point of view as well as in applications. In practice, such homomorphism counts yield a vector embedding of graphs, which paves the way to using standard machine learning techniques on graph data. In this talk, the converse question of reconstructing a graph G from such homomorphism counts is addressed. This problem arises when attempting to interpret predictions made by machine learning machineries on vector-embedded graphs. We study the complexity of reconstruction from homomorphism counts, giving both positive and negative results.

    This is joint work in progress with Jan Böker and Christoph Standke.
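    For intuition, homomorphism counts hom(F, G) can be computed by brute force over all vertex maps. The sketch below (illustrative only; exponential in |V(F)|) produces the kind of counts that make up the vector embedding mentioned above.

    ```python
    from itertools import product

    def hom_count(F_edges, F_n, G_edges, G_n):
        """Count homomorphisms F -> G by brute force over all maps V(F) -> V(G):
        a map is a homomorphism iff every edge of F lands on an edge of G."""
        G_adj = set(G_edges) | {(v, u) for (u, v) in G_edges}
        return sum(
            1
            for f in product(range(G_n), repeat=F_n)
            if all((f[u], f[v]) in G_adj for (u, v) in F_edges)
        )

    # Target graph G = K3 (a triangle); patterns F: a single edge and K3 itself.
    triangle = [(0, 1), (1, 2), (0, 2)]
    edge = [(0, 1)]
    hom_edge = hom_count(edge, 2, triangle, 3)     # 2 * |E(G)| = 6
    hom_tri = hom_count(triangle, 3, triangle, 3)  # the 6 automorphisms of K3
    ```
    
    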
  • 14.11.2022
    10:00 - 10:30
    BSc
    Graph Clustering with GNNs
    Soysal Dogukan
    [bachelor] Graph Neural Networks (GNNs) based on message passing perform outstandingly on graph analysis tasks by leveraging higher-order structural information. The research trend in recent years has given birth to various GNN architectures. In this thesis, we present and empirically evaluate a recently proposed GNN-based unsupervised attributed clustering approach, Deep Modularity Networks (DMoN), with carefully designed experiments. We experiment with different models on several real-world datasets and evaluate them based on ground-truth label correlation and graph clustering quality. The robustness of each model is further tested with synthetic graphs under different signal-to-noise scenarios. We then analyze the outstanding performance of DMoN across different metrics compared to four baselines.
  • 14.11.2022
    16:00 - 17:00
     
    Querying Incomplete Numerical Data: Between Certain and Possible Answers
    Liat Peterfreund
    Queries with aggregation and arithmetic operations, as well as incomplete data, are common in real-world databases, but we lack a good understanding of how they should interact. On the one hand, systems based on SQL provide ad-hoc rules for numerical nulls; on the other, theoretical research largely concentrates on the standard notions of certain and possible answers, which, in the presence of numerical attributes and aggregates, are often meaningless. In this work, we define a principled compositional framework for databases with numerical nulls and answering queries with arithmetic and aggregations over them. We assume that missing values are given by probability distributions associated with marked nulls, which yields a model of probabilistic bag databases. We concentrate on queries that resemble standard SQL with arithmetic and aggregation and show that they are measurable, and that their outputs have a finite representation. Moreover, since the classical forms of answers provide little information in the numerical setting, we look at the probability that numerical values in output tuples belong to specific intervals. Even though their exact computation is intractable, we give efficient approximation algorithms to compute such probabilities.

    The talk is based on joint work with Marco Console and Leonid Libkin, and will be presented in PODS 2023.
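    A minimal Monte Carlo sketch of the interval-probability question (my own illustration; the paper's approximation algorithms are more refined than plain sampling): estimate the probability that an aggregate over a column containing a distribution-valued marked null lands in a given interval.

    ```python
    import random

    def interval_prob(null_samplers, fixed, lo, hi, trials=20000, seed=0):
        """Monte Carlo estimate of P(lo <= SUM(column) <= hi) for a column in
        which some values are marked nulls with known distributions."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            total = sum(fixed) + sum(draw(rng) for draw in null_samplers)
            if lo <= total <= hi:
                hits += 1
        return hits / trials

    # One known value 5 and one null distributed uniformly on [0, 10]:
    # SUM lies in [10, 15] iff the null lies in [5, 10], i.e. probability 1/2.
    p = interval_prob([lambda rng: rng.uniform(0, 10)], [5.0], 10.0, 15.0)
    ```
    
    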
  • 11.11.2022
    10:30 - 11:30
    ATI
    Graph Homomorphisms and Sampling and the Prokhorov Metric
    Jan Böker
    [ati] The tree distance, a variant of the well-known cut distance from the theory of graph limits, is a pseudometric on graphs that can be characterized in terms of tree homomorphism densities (B. 2021). However, existing results are non-quantitative in nature and do not give an understanding of why two graphs are similar in the tree distance if their tree homomorphism densities are close. We show how tree homomorphism densities are closely related to taking samples of the colors the color-refinement algorithm assigns to the vertices of a graph. To measure the similarity of such colors in a graph, we use the Prokhorov metric to define a pseudometric on graphs that is similar to one proposed by Chen et al. (2022). Our pseudometric is computable in polynomial time and induces the same topology as both the tree distance and tree homomorphism densities and, hence, can arguably be seen as the right way to turn the color-refinement algorithm into a distance measure on graphs.
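    For reference, the color-refinement algorithm whose colors are being sampled can be sketched in a few lines (a standard formulation, not the paper's code):

    ```python
    def color_refinement(adj):
        """Iterate color refinement (1-WL) until the vertex partition is stable.
        `adj` maps every vertex to the list of its neighbors."""
        color = {v: 0 for v in adj}
        while True:
            # new color = old color plus the multiset of neighbor colors
            sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
                   for v in adj}
            relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
            new = {v: relabel[sig[v]] for v in adj}
            if len(set(new.values())) == len(set(color.values())):
                return new  # refinement only splits classes, so equal counts = stable
            color = new

    # Path on 4 vertices: the two endpoints end up in one color class,
    # the two inner vertices in another.
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    colors = color_refinement(path)
    ```
    
    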
  • 11.11.2022
    12:00 - 12:30
    BSc
    Answering Queries from Substructure Counts
    Julius Tischbein
    [bachelor]
  • 04.11.2022
    10:30 - 11:30
    ATI
    Learning algorithms for ω-automata
    Leon Bohn
    [ati] The RPNI algorithm (Oncina, Garcia 1992) constructs deterministic finite automata (DFA) from finite sets of negative and positive example words. In this talk, we propose an extension of this algorithm to infinite words, which can learn all deterministic ω-automata with an informative right congruence in the limit with polynomial time and data. Further, we introduce a second learner that is complete for the full class of deterministic Büchi automata (DBA). This learner constructs multiple DFAs using a polynomial-time active learning algorithm on finite words as a black box, using an oracle that we implement based on the given sample of ω-words, and combines these DFAs into a single DBA.
  • 21.10.2022
    10:30 - 11:30
    ATI
    Analysis of multi-stage flow shop processing systems
    Robin Westermann
    We introduce and investigate a simple flow shop model with equivalent jobs passing through a given sequence of stations. The model is characterized by the existence of constraints on the maximum number of jobs on specified sets of stations. Some elementary observations serve to illustrate its dynamics. Our main result is the NP-hardness of three central problems: reachability, absence of deadlocks, and determining the optimal throughput for a variant of the model.
  • 14.10.2022
    10:30 - 11:30
    ATI
    Graph Neural Networks as Fast Global Search Heuristics for Constraint Satisfaction
    Jan Tönshoff
    [ati] We propose a universal Graph Neural Network architecture which can be trained as an end-to-end search heuristic for any Constraint Satisfaction Problem (CSP). Our architecture can be trained unsupervised with policy gradient descent to generate problem-specific heuristics for any CSP in a purely data-driven manner. The approach is based on a novel graph representation for CSPs that is both generic and compact and enables us to process every possible CSP instance with one GNN, regardless of constraint arity, relations or domain size. Unlike previous RL-based methods, we operate on a global search action space and allow our GNN to modify any number of variables in every step of the stochastic search. This enables our method to properly leverage the inherent parallelism of GNNs. We perform a thorough empirical evaluation where we learn heuristics for well-known and important CSPs from random data, including graph coloring, MaxCut, 3-SAT and MAX-k-SAT. Our approach outperforms prior approaches for neural combinatorial optimization by a substantial margin. It can compete with, and even improve upon, conventional search heuristics on test instances that are several orders of magnitude larger and structurally more complex than those seen during training.
  • 10.10.2022
    15:00 - 15:30
    BSc
    Motif Counting with Color Coding
    Jan Reinhard
    [bachelor] A motif is a pattern that occurs more often in a graph than expected by chance. Discovering and counting motifs has applications in bioinformatics but is computationally hard. We explore the theoretical background of the underlying problem of subgraph isomorphism. Then we discuss the color coding method by Alon et al. and adapt it to count trees and more general graphs of bounded treewidth. These algorithms are analysed, implemented, and evaluated. We find that color coding is a useful method to approximately count subgraphs, especially for graphs of size k ≥ 9.
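    The core of the color coding method can be sketched as a toy implementation for counting paths on k vertices (variable names and the estimator wrapper are my own; the thesis treats trees and bounded-treewidth patterns more generally): color vertices uniformly at random with k colors, count "colorful" paths by dynamic programming over color sets, and rescale by the probability k!/k^k that a fixed path becomes colorful.

    ```python
    import math
    import random

    def colorful_paths(adj, coloring, k):
        """DP over color sets: counts walks on k vertices whose colors are all
        distinct. Such walks are automatically simple paths; each path is
        counted once per direction, hence the final division by 2."""
        dp = {(v, frozenset([coloring[v]])): 1 for v in adj}
        for size in range(2, k + 1):
            step = {}
            for (v, S), cnt in dp.items():
                if len(S) != size - 1:
                    continue
                for u in adj[v]:
                    if coloring[u] not in S:
                        key = (u, S | {coloring[u]})
                        step[key] = step.get(key, 0) + cnt
            dp.update(step)
        return sum(c for (v, S), c in dp.items() if len(S) == k) // 2

    def estimate_path_count(adj, k, trials=200, seed=1):
        """Color-coding estimate of the number of simple k-vertex paths:
        a fixed path is colorful with probability k!/k^k, so we rescale."""
        rng = random.Random(seed)
        total = sum(
            colorful_paths(adj, {v: rng.randrange(k) for v in adj}, k)
            for _ in range(trials)
        )
        return total / trials * k ** k / math.factorial(k)

    # Sanity check: under a rainbow coloring of a triangle, all 3 of its
    # 3-vertex paths are colorful.
    triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    exact = colorful_paths(triangle, {0: 0, 1: 1, 2: 2}, 3)
    ```
    
    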
  • 28.09.2022
    11:00 - 11:30
    BSc
    Algorithms for Computing Treedepth
    Amelie Dittmann

    [bachelor] Treedepth is a graph invariant that plays an important role in the structure theory of sparse graphs. As computing treedepth is NP-hard, there is no polynomial-time algorithm unless P=NP. Thus there are parameterized algorithms as well as approximation algorithms. We discuss the current state-of-the-art exact algorithm for computing treedepth. This is a parameterized algorithm whose running time is linear in the number of vertices and exponential in the treedepth of the graph.
    The Parameterized Algorithms and Computational Experiments (PACE) challenge 2020 was a competition for implementations of treedepth solvers. While the challenge evaluated the number of solved instances, we investigate the functionality of the algorithms in more detail. The key results of the submitted algorithms are special pre-processing and pruning techniques. For the analysis we focus on the best solvers submitted to the PACE challenge 2020 and examine their performance on selected graphs. Finally, we work out the best ways of pre-processing and pruning used by the algorithms.
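    The recursive definition underlying these solvers (before any pre-processing or pruning) can be stated directly as a brute-force sketch; this is only the naive exponential recursion, not one of the PACE submissions.

    ```python
    def components(vertices, edges):
        """Connected components of the subgraph induced by `vertices`."""
        adj = {v: set() for v in vertices}
        for u, w in edges:
            if u in adj and w in adj:
                adj[u].add(w)
                adj[w].add(u)
        seen, comps = set(), []
        for s in vertices:
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(adj[v] - comp)
            seen |= comp
            comps.append(frozenset(comp))
        return comps

    def treedepth(vertices, edges):
        """Exact treedepth via its recursive definition: for a disconnected
        graph, the maximum over components; for a connected graph,
        1 + min over vertices v of td(G - v). Exponential; tiny graphs only."""
        vertices = frozenset(vertices)
        if not vertices:
            return 0
        comps = components(vertices, edges)
        if len(comps) > 1:
            return max(treedepth(c, edges) for c in comps)
        if len(vertices) == 1:
            return 1
        return 1 + min(treedepth(vertices - {v}, edges) for v in vertices)

    # A path on 4 vertices has treedepth 3; a star with 3 leaves has treedepth 2.
    td_path = treedepth({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3)])
    td_star = treedepth({0, 1, 2, 3}, [(0, 1), (0, 2), (0, 3)])
    ```
    
    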
  • 01.09.2022
    15:00 - 16:00
    BSc
    Arithmetic with Bounded Depth Threshold Circuits
    Elena Küchle
    [bachelor] Feed-forward neural networks (FNNs) have gained considerable attention in recent years due to their success in dealing with a number of machine learning problems. One way to model such networks is by means of threshold (LT) circuits. Threshold circuits are a well-studied extension of the traditional model of Boolean circuits. However, little to no research has been conducted on the arithmetic of rational numbers using threshold circuits. First, we assess the current state of knowledge on the basic arithmetic functions on integers with bounded-depth LT circuits. In addition, we show that these results also hold for arithmetic on rationals. Lastly, we approximate non-rational functions, such as the exponential and the natural logarithm, using LT circuits.
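    One classical example of the expressive power of even a single LT gate (a standard fact, sketched here with an assumed little-endian bit convention; not one of the thesis's constructions) is that the carry-out of n-bit addition is computable by one threshold gate.

    ```python
    def threshold_gate(weights, theta, bits):
        """A linear threshold (LT) gate: outputs 1 iff the weighted sum of its
        0/1 inputs reaches the threshold theta."""
        return int(sum(w * b for w, b in zip(weights, bits)) >= theta)

    def carry_out(x_bits, y_bits):
        """Carry-out of n-bit addition as a single LT gate: the carry is 1 iff
        x + y >= 2^n, so weight the j-th bits by 2^j and set theta = 2^n."""
        n = len(x_bits)
        weights = [2 ** j for j in range(n)] * 2
        return threshold_gate(weights, 2 ** n, list(x_bits) + list(y_bits))

    # 3-bit, little-endian examples: 5 + 6 = 11 >= 8 produces a carry-out,
    # while 2 + 3 = 5 < 8 does not.
    c1 = carry_out([1, 0, 1], [0, 1, 1])  # bits of 5 and 6
    c0 = carry_out([0, 1, 0], [1, 1, 0])  # bits of 2 and 3
    ```
    
    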